nep-ecm New Economics Papers
on Econometrics
Issue of 2013‒07‒15
twenty-six papers chosen by
Sune Karlsson
Orebro University

  1. DYNAMIC SPECIFICATION TESTS FOR DYNAMIC FACTOR MODELS By Gabriele Fiorentini; Enrique Sentana
  2. Quantile regression with clustered data By J.M.C. Santos Silva; Paulo M.D.C. Parente
  3. Methods in empirical economics - a selective review with applications By Hübler, Olaf
  4. "On Robust Properties of the SIML Estimation of Volatility under Micro-market noise and Random Sampling" By Hiroumi Misaki; Naoto Kunitomo
  5. "The SIML Estimation of Integrated Covariance and Hedging Coefficient under Micro-market noise and Random Sampling" By Naoto Kunitomo; Hiroumi Misaki
  6. Distribution-Free Estimation of Zero-Inflated Models with Unobserved Heterogeneity By Rodica Gilles; Seik Kim
  7. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions By Asger Lunde; Anne Floor Brix
  8. Treatment effects and panel data By Lechner, Michael
  9. Testing for Neglected Nonlinearity Using Extreme Learning Machines By KYU LEE SHIN; JIN SEO CHO
  10. Specification Tests of Calibrated Option Pricing Models By Jarrow, Robert; Kwok, Simon
  11. Consistent Estimation of Agent-Based Models by Simulated Minimum Distance. By Jakob Grazzini; Matteo G. Richiardi
  12. Outliers & predicting time series: A comparative study By Ardelean, Vlad; Pleier, Thomas
  13. Endogenous variables in non-linear models with mixed effects: Inconsistence under perfect identification conditions? By Franz Buscha; Anna Conte
  14. Regional Policy Evaluation: Interactive Fixed Effects and Synthetic Controls By Gobillon, Laurent; Magnac, Thierry
  15. Diffusion Indexes with Sparse Loadings By Johannes Tang Kristensen
  16. Doubly Robust Estimation of Causal Effects with Multivalued Treatments By Uysal, S. Derya
  17. Co-summability from linear to non-linear cointegration By Vanessa Berenguer Rico; Jesús Gonzalo
  18. Maximum Entropy Bootstrap Algorithm Enhancements By Hrishikesh D. Vinod
  19. A Measure-Valued Differentiation Approach to Sensitivity Analysis of Quantiles By Bernd Heidergott; Warren Volk-Makarewicz
  20. On the multifractal effects generated by monofractal signals By Dariusz Grech; Grzegorz Pamuła
  21. Linear Social Interactions Models By Blume, Lawrence E.; Brock, William A.; Durlauf, Steven N.; Jayaraman, Rajshri
  22. Using a Control Function to Resolve the Travel Cost Endogeneity Problem in Recreation Demand Models By Melstrom, Richard; Lupi, Frank
  23. Bayesian analysis of dynamic effects in inefficiency: evidence from the Colombian banking sector By Jorge E. Galán; Helena Veiga; Michael P. Wiper
  24. Mortality: a statistical approach to detect model misspecification By Jean-Charles Croix; Frédéric Planchet; Pierre-Emmanuel Thérond
  25. Problems of Sample-Selection Bias in the Historical Heights Literature: A Theoretical and Econometric Analysis By Bodenhorn, Howard; Guinnane, Timothy W.; Mroz, Thomas A.
  26. Bayesian network as a modelling tool for risk management in agriculture By Svend Rasmussen; Anders L. Madsen; Mogens Lund

  1. By: Gabriele Fiorentini (Università di Firenze); Enrique Sentana (CEMFI, Centro de Estudios Monetarios y Financieros)
    Abstract: We derive computationally simple and intuitive expressions for score tests of neglected serial correlation in common and idiosyncratic factors in dynamic factor models using frequency domain techniques. The implied time domain orthogonality conditions are analogous to the conditions obtained by treating the smoothed estimators of the innovations in the latent factors as if they were observed, but they account for their final estimation errors. Monte Carlo exercises confirm the finite sample reliability and power of our proposed tests. Finally, we illustrate their empirical usefulness in an application that constructs a monthly coincident indicator for the US from four macro series.
    Keywords: Kalman filter, LM tests, Spectral maximum likelihood, Wiener-Kolmogorov filter.
    JEL: C32 C38 C52 C12 C13
    Date: 2013–06
    URL: http://d.repec.org/n?u=RePEc:cmf:wpaper:wp2013_1306&r=ecm
  2. By: J.M.C. Santos Silva; Paulo M.D.C. Parente
    Abstract: We show that the quantile regression estimator is consistent and asymptotically normal when the error terms are correlated within clusters but independent across clusters. A consistent estimator of the covariance matrix of the asymptotic distribution is provided and we propose a specification test capable of detecting the presence of intra-cluster correlation. A small simulation study illustrates the finite sample performance of the test and of the covariance matrix estimator.
    Date: 2013–06–30
    URL: http://d.repec.org/n?u=RePEc:esx:essedp:728&r=ecm
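    Illustration: a minimal Python sketch of one common way to obtain cluster-robust standard errors for a quantile regression, namely a pairs-cluster bootstrap that resamples whole clusters so intra-cluster correlation is preserved. This is not the paper's analytic covariance estimator or its specification test; the function and column names are illustrative.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      from statsmodels.regression.quantile_regression import QuantReg

      def cluster_bootstrap_qreg(df, y_col, x_cols, cluster_col, q=0.5, B=499, seed=0):
          rng = np.random.default_rng(seed)
          clusters = df[cluster_col].unique()
          X = sm.add_constant(df[x_cols])
          beta_hat = QuantReg(df[y_col], X).fit(q=q).params.to_numpy()
          draws = np.empty((B, beta_hat.size))
          for b in range(B):
              sampled = rng.choice(clusters, size=clusters.size, replace=True)
              boot = pd.concat([df[df[cluster_col] == c] for c in sampled], ignore_index=True)
              Xb = sm.add_constant(boot[x_cols])
              draws[b] = QuantReg(boot[y_col], Xb).fit(q=q).params.to_numpy()
          # Point estimates and bootstrap standard errors.
          return beta_hat, draws.std(axis=0, ddof=1)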
  3. By: Hübler, Olaf
    Abstract: This paper presents some selective aspects of standard econometric methods and of new developments in econometrics that are important for applications with microeconomic data. The range includes variance estimators, measurement of outliers, problems of partially identified parameters, nonlinear models, possibilities of instrumental variables, panel methods for models with time-invariant regressors, difference-in-differences estimators, matching procedures, treatment effects in quantile regression analysis and regression discontinuity approaches. These methods are applied to production functions with IAB establishment panel data.
    Keywords: Significance, standard errors, outliers, influential observations, partially identified parameters, unobserved heterogeneity, instrumental variables, panel estimators, quantile regressions, causality, treatment effects, DiD estimators, regression discontinuity
    JEL: C21 C26 D22 J53
    Date: 2013–07
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-513&r=ecm
  4. By: Hiroumi Misaki (Research Center for Advanced Science and Technology, University of Tokyo); Naoto Kunitomo (Faculty of Economics, University of Tokyo)
    Abstract: For estimating the integrated volatility and covariance by using high frequency data, Kunitomo and Sato (2008, 2011) have proposed the Separating Information Maximum Likelihood (SIML) method when there are micro-market noises. The SIML estimator has reasonable finite sample properties and asymptotic properties when the sample size is large under general conditions with non-Gaussian processes or volatility models. We show that the SIML estimator has an asymptotic robustness property in the sense that it is consistent and satisfies stable convergence (i.e. asymptotic normality in the deterministic case) when there are micro-market noises and the observed high-frequency data are sampled randomly from the underlying (continuous time) stochastic process. The SIML estimator also has reasonable finite sample properties under these effects.
    Date: 2013–06
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2013cf892&r=ecm
  5. By: Naoto Kunitomo (Faculty of Economics, University of Tokyo); Hiroumi Misaki (Research Center for Advanced Science and Technology, University of Tokyo)
    Abstract: For estimating the integrated volatility and covariance by using high frequency data, Kunitomo and Sato (2008, 2011) have proposed the Separating Information Maximum Likelihood (SIML) method when there are micro-market noises. The SIML estimator has reasonable finite sample properties and asymptotic properties when the sample size is large under general conditions with non-Gaussian processes or volatility models. We show that the SIML estimation is useful for estimating the integrated covariance and the hedging coefficient when there is micro-market noise and the financial high frequency data are randomly sampled. The SIML estimator is consistent, satisfies stable convergence (i.e. asymptotic normality in the deterministic case), and has reasonable finite sample properties under these effects.
    Date: 2013–06
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2013cf893&r=ecm
  6. By: Rodica Gilles; Seik Kim
    Abstract: Count models often best describe the nature of data in health economics, but the presence of fixed effects with excess zeros and overdispersion strictly limits the choice of estimation methods. This paper presents a quasi-conditional likelihood method to consistently estimate models with excess zeros and unobserved individual heterogeneity when the true generating process is unknown. Monte Carlo simulation studies show that our zero-inflated quasi-conditional maximum likelihood (ZI-QCML) estimator outperforms other methods and is robust to distributional misspecifications. We apply the ZI-QCML estimator to analyze the frequency of doctor visits.
    Date: 2013–07
    URL: http://d.repec.org/n?u=RePEc:udb:wpaper:uwec-2013-03&r=ecm
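    Illustration: for context, a minimal Python sketch of the fully parametric benchmark, a zero-inflated Poisson fitted by maximum likelihood; the paper's ZI-QCML estimator is distribution-free and handles unobserved heterogeneity, which this sketch does not attempt. All names are illustrative.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import gammaln, expit

      def zip_negloglik(params, y, X):
          k = X.shape[1]
          beta, gamma = params[:k], params[k:]
          lam = np.exp(X @ beta)          # Poisson mean, log link
          pi = expit(X @ gamma)           # zero-inflation probability, logit link
          loglik_pos = np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1)
          loglik_zero = np.log(pi + (1 - pi) * np.exp(-lam))
          return -np.where(y == 0, loglik_zero, loglik_pos).sum()

      def fit_zip(y, X):
          x0 = np.zeros(2 * X.shape[1])
          res = minimize(zip_negloglik, x0, args=(y, X), method="BFGS")
          return res.x  # stacked (beta, gamma)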
  7. By: Asger Lunde (Aarhus University and CREATES); Anne Floor Brix (Aarhus University and CREATES)
    Abstract: In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF-based estimator is investigated in a Monte Carlo study and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from the two estimation methods without noise correction is studied. Second, a noise-robust GMM estimator is constructed by approximating integrated volatility by a realized kernel instead of realized variance. The PBEFs are also recalculated in the noise setting, and the two estimation methods' ability to correctly account for the noise is investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF-based estimator in practice.
    Keywords: GMM estimation, Heston model, high-frequency data, integrated volatility, market microstructure noise, prediction-based estimating functions, realized variance, realized kernel
    JEL: C13 C22 C51
    Date: 2013–02–07
    URL: http://d.repec.org/n?u=RePEc:aah:create:2013-23&r=ecm
  8. By: Lechner, Michael
    Abstract: It is a major achievement of the econometric treatment effect literature to clarify under which conditions causal effects are non-parametrically identified. The first part of this chapter focuses on the static treatment model. In this part, I show how panel data can be used to improve the credibility of matching and instrumental variable estimators. In practice, these gains come mainly from the availability of outcome variables measured prior to treatment. Such outcome variables also foster the use of alternative identification strategies, in particular so-called difference-in-difference estimation. In addition to improving the credibility of static causal models, panel data may allow credibly estimating dynamic causal models, which is the main theme of the second part of this chapter.
    Keywords: Matching, instrumental variables, local average treatment effects, difference-in-difference estimation, dynamic treatment effects
    JEL: C22 C23 C32 C33
    Date: 2013–06
    URL: http://d.repec.org/n?u=RePEc:usg:econwp:2013:14&r=ecm
  9. By: KYU LEE SHIN (Educational Research Institute, Inha University); JIN SEO CHO (School of Economics, Yonsei University)
    Abstract: In this study, we introduce statistics for testing neglected nonlinearity using the extreme learning machines introduced by Huang, Zhu, and Siew (2006, Neurocomputing) and call them ELMNN tests. The ELMNN tests are very convenient and can be widely applied because they are obtained as byproducts of estimating linear models, and they can serve as quick diagnostic statistics complementing other, computationally more burdensome tests. For the proposed test statistics, we provide a set of regularity conditions under which they asymptotically follow a chi-squared distribution under the null and are consistent under the alternative. We conduct Monte Carlo experiments and examine how the tests behave when the sample size is finite. Our experiments show that the tests exhibit the properties predicted by the theory of this paper.
    Keywords: Extreme learning machines, neglected nonlinearity, Wald test, single layer feedforward network, asymptotic distribution
    Date: 2013–06
    URL: http://d.repec.org/n?u=RePEc:yon:wpaper:2013rwp-57&r=ecm
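    Illustration: a simplified LM-type Python variant of the idea: augment a linear model with the outputs of randomly generated, extreme-learning-machine-style hidden units and test their joint relevance via n*R^2 from an auxiliary regression of the OLS residuals. The paper's ELMNN statistics are Wald-type and rest on their own regularity conditions, so this is only indicative; the names and the chi-squared degrees of freedom (the number of hidden units, assuming no collinearity with the regressors) are assumptions.
      import numpy as np
      from scipy.stats import chi2

      def elm_nn_test(y, X, n_hidden=5, seed=0):
          rng = np.random.default_rng(seed)
          n = X.shape[0]
          Xc = np.column_stack([np.ones(n), X])
          resid = y - Xc @ np.linalg.lstsq(Xc, y, rcond=None)[0]   # linear-model residuals
          W = rng.normal(size=(Xc.shape[1], n_hidden))             # random, fixed hidden weights
          H = 1.0 / (1.0 + np.exp(-Xc @ W))                        # logistic hidden-unit outputs
          Z = np.column_stack([Xc, H])
          fitted = Z @ np.linalg.lstsq(Z, resid, rcond=None)[0]
          r2 = 1.0 - ((resid - fitted) ** 2).sum() / (resid ** 2).sum()
          lm_stat = n * r2
          return lm_stat, chi2.sf(lm_stat, df=n_hidden)            # statistic and p-value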
  10. By: Jarrow, Robert; Kwok, Simon
    Abstract: In spite of the popularity of model calibration in finance, empirical researchers have put more emphasis on model estimation than on the equally important goodness-of-fit problem. This is due partly to the ignorance of modelers and more to the limited ability of existing statistical tests to detect specification errors. In practice, models are often calibrated by minimizing the sum of squared differences between modelled and actual observations. It is challenging to disentangle model error from estimation error in the residual series. To circumvent this difficulty, we study an alternative way of estimating the model by exact calibration. We argue that standard time series tests based on the exact approach can better reveal model misspecification than the error-minimizing approach. In the context of option pricing, we illustrate the usefulness of exact calibration in detecting model misspecification. Under a heteroskedastic observation error structure, our simulation results show that the Black-Scholes model calibrated by the exact approach delivers more accurate hedging performance than that calibrated by error minimization.
    Date: 2013–05
    URL: http://d.repec.org/n?u=RePEc:syd:wpaper:2123/9191&r=ecm
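    Illustration: a minimal Python sketch of what exact calibration means in the Black-Scholes case: for each observed option price, solve for the volatility that reproduces the price exactly (the implied volatility), rather than minimizing squared pricing errors across options. The paper's testing procedure is not reproduced here.
      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import brentq

      def bs_call(S, K, T, r, sigma):
          d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
          d2 = d1 - sigma * np.sqrt(T)
          return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

      def implied_vol(price, S, K, T, r):
          # Exact calibration: root of the pricing error for this single option.
          return brentq(lambda s: bs_call(S, K, T, r, s) - price, 1e-6, 5.0)

      # Example: recover sigma = 0.2 from its own model price.
      p = bs_call(100.0, 100.0, 0.5, 0.01, 0.2)
      print(implied_vol(p, 100.0, 100.0, 0.5, 0.01))   # ~0.2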
  11. By: Jakob Grazzini; Matteo G. Richiardi
    Abstract: Agent-based (AB) models are considered a promising tool for macroeconomic analysis. However, until estimation of AB models becomes common practice, they will not reach the center stage of macroeconomics. Two difficulties arise in the estimation of AB models: (i) the criterion function has no simple analytical expression, and (ii) the aggregate properties of the model cannot be understood analytically. The first calls for simulation-based estimation techniques; the second requires additional statistical testing to ensure that the simulated quantities are consistent estimators of the theoretical quantities. The possibly high number of parameters involved and the non-linearities in the theoretical quantities used for estimation add to the complexity of the problem. As these difficulties are also shared, though to a different extent, by DSGE models, we first look at the lessons that can be learned from that literature. We identify simulated minimum distance (SMD) as a practical approach to estimation of AB models, and we discuss the conditions which ensure consistency of SMD estimators in AB models.
    Keywords: Consistent Estimation, Method of Simulated Moments, Agent-based Models
    JEL: C15 C63
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:cca:wplabo:130&r=ecm
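    Illustration: a generic simulated-minimum-distance skeleton in Python, with a toy AR(1) simulator standing in for the agent-based model; the choice of moments, weighting matrix, starting values and simulator is purely illustrative.
      import numpy as np
      from scipy.optimize import minimize

      def simulate(theta, n, seed=0):
          # Toy AR(1) "model"; in practice this would be the agent-based simulator.
          rho, sigma = theta[0], abs(theta[1])
          rng = np.random.default_rng(seed)
          x = np.zeros(n)
          for t in range(1, n):
              x[t] = rho * x[t - 1] + sigma * rng.standard_normal()
          return x

      def moments(x):
          # Moments used for matching: variance and first-order autocorrelation.
          return np.array([x.var(), np.corrcoef(x[:-1], x[1:])[0, 1]])

      def smd(data, n_sim=20000, weight=None):
          m_data = moments(data)
          w = np.eye(m_data.size) if weight is None else weight
          def crit(theta):
              # A fixed seed keeps the criterion a deterministic function of theta.
              g = moments(simulate(theta, n_sim, seed=123)) - m_data
              return g @ w @ g
          return minimize(crit, x0=np.array([0.5, 1.0]), method="Nelder-Mead").x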
  12. By: Ardelean, Vlad; Pleier, Thomas
    Abstract: Nonparametric prediction of time series is a viable alternative to parametric prediction, since parametric prediction relies on the correct specification of the process, its order and the distribution of the innovations. Often these are not known and have to be estimated from the data. Another source of nuisance can be the occurrence of outliers. By using nonparametric methods we circumvent both problems, the specification of the process and the occurrence of outliers. In this article we compare the prediction power of parametric prediction, semiparametric prediction and nonparametric methods such as support vector machines and pattern recognition. To measure prediction power we use the MSE. Furthermore, we test whether the increase in prediction power is statistically significant.
    Keywords: Parametric prediction,Nonparametric prediction,Support Vector Regression,Outliers
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:zbw:iwqwdp:052013&r=ecm
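    Illustration: a small Python sketch in the spirit of the comparison: one-step-ahead forecasts from a linear AR-type model versus support vector regression on the same lagged values, scored by MSE on a series contaminated by a single additive outlier. The paper's actual designs and significance tests are not reproduced, and all settings are illustrative.
      import numpy as np
      from sklearn.svm import SVR
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import mean_squared_error

      def lagged(x, p):
          # Build a matrix of p lags and the corresponding targets.
          X = np.column_stack([x[p - j - 1:len(x) - j - 1] for j in range(p)])
          return X, x[p:]

      rng = np.random.default_rng(1)
      e = rng.standard_normal(600)
      x = np.zeros(600)
      for t in range(1, 600):
          x[t] = 0.6 * x[t - 1] + e[t]
      x[300] += 8.0                                  # one additive outlier

      X, y = lagged(x, p=2)
      Xtr, ytr, Xte, yte = X[:400], y[:400], X[400:], y[400:]
      for name, model in [("AR(2)", LinearRegression()), ("SVR", SVR(C=10.0, epsilon=0.1))]:
          pred = model.fit(Xtr, ytr).predict(Xte)
          print(name, mean_squared_error(yte, pred))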
  13. By: Franz Buscha (Westminster Business School, University of Westminster); Anna Conte (Strategic Interaction Group, Max Planck Institute of Economics, Jena)
    Abstract: This paper examines the consequences of introducing a normally distributed effect into a system where the dependent variable is ordered and the explanatory variable is ordered and endogenous. Using simulation techniques we show that a naïve bivariate ordered probit estimator which fails to take a mixed effect into account will result in inconsistent estimates even when identification conditions are optimal. Our results suggest this finding only applies to non-linear endogenous systems.
    Keywords: bivariate probit, bivariate ordered probit, mixed effects, endogenous binary variables, constant parameters
    JEL: C35 C36 C51
    Date: 2013–07–01
    URL: http://d.repec.org/n?u=RePEc:jrp:jrpwrp:2013-027&r=ecm
  14. By: Gobillon, Laurent (INED and Paris School of Economics); Magnac, Thierry (TSE)
    Abstract: In this paper, we investigate the use of interactive effect or linear factor models in regional policy evaluation. We contrast treatment effect estimates obtained by Bai (2009)'s least squares method with the popular difference in difference estimates as well as with estimates obtained using synthetic control approaches as developed by Abadie and coauthors. We show that difference in differences are generically biased and we derive the support conditions that are required for the application of synthetic controls. We construct an extensive set of Monte Carlo experiments to compare the performance of these estimation methods in small samples. As an empirical illustration, we also apply them to the evaluation of the impact on local unemployment of an enterprise zone policy implemented in France in the 1990s.
    Keywords: Policy evaluation, Linear factor models, Synthetic controls, Economic geography, Enterprise zones
    Date: 2013–07
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:27409&r=ecm
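    Illustration: a minimal Python sketch of the synthetic control building block: choose non-negative donor weights summing to one that best reproduce the treated unit's pre-treatment path. The interactive-fixed-effects estimator and the support conditions derived in the paper are not covered here; names are illustrative.
      import numpy as np
      from scipy.optimize import minimize

      def synthetic_control_weights(x1, X0):
          # x1: (T0,) pre-treatment path of the treated unit; X0: (T0, J) donor paths.
          J = X0.shape[1]
          obj = lambda w: np.sum((x1 - X0 @ w) ** 2)
          cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
          res = minimize(obj, np.full(J, 1.0 / J), method="SLSQP",
                         bounds=[(0.0, 1.0)] * J, constraints=cons)
          return res.x

      # The estimated effect path is then y1_post - Y0_post @ w.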
  15. By: Johannes Tang Kristensen (Aarhus University and CREATES)
    Abstract: The use of large-dimensional factor models in forecasting has received much attention in the literature, with the consensus being that improvements on forecasts can be achieved when comparing with standard models. However, recent contributions in the literature have demonstrated that care needs to be taken when choosing which variables to include in the model. A number of different approaches to determining these variables have been put forward. These are, however, often based on ad-hoc procedures or abandon the underlying theoretical factor model. In this paper we take a different approach to the problem by using the LASSO as a variable selection method to choose between the possible variables and thus obtain sparse loadings from which factors or diffusion indexes can be formed. This allows us to build a more parsimonious factor model which is better suited for forecasting compared to the traditional principal components (PC) approach. We provide an asymptotic analysis of the estimator and illustrate its merits empirically in a forecasting experiment based on US macroeconomic data. Overall we find that compared to PC we obtain improvements in forecasting accuracy and thus find it to be an important alternative to PC.
    Keywords: Forecasting, Factor Models, Principal Components Analysis, LASSO
    JEL: C38 C53 E27 E37
    Date: 2013–03–07
    URL: http://d.repec.org/n?u=RePEc:aah:create:2013-22&r=ecm
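    Illustration: one simple Python reading of the idea: use the LASSO to select which series enter the factor and take the first principal component of the selected series as a sparse diffusion index. The paper's estimator of sparse loadings differs in detail; all names and the cross-validation choice are illustrative.
      import numpy as np
      from sklearn.linear_model import LassoCV
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      def sparse_diffusion_index(X, y_target):
          Xs = StandardScaler().fit_transform(X)
          lasso = LassoCV(cv=5).fit(Xs, y_target)
          keep = np.flatnonzero(lasso.coef_ != 0.0)        # sparse "loadings": selected series
          if keep.size == 0:
              keep = np.arange(X.shape[1])                 # fall back to all series
          factor = PCA(n_components=1).fit_transform(Xs[:, keep]).ravel()
          return factor, keep

      # The index can then enter a standard h-step-ahead forecasting regression.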
  16. By: Uysal, S. Derya (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria)
    Abstract: This paper provides doubly robust estimators for treatment effect parameters defined in a multivalued treatment effect framework. We apply this method to a unique data set from the British Cohort Study (BCS) to estimate returns to different levels of schooling. Average returns are estimated for the entire population, as well as conditional on having a specific educational achievement. The analysis is carried out for female and male samples separately to capture possible gender differences. The results indicate that, on average, the percentage wage gain due to higher education versus any lower educational attainment is higher for highly educated females than for highly educated males.
    Keywords: Multivalued treatment, returns to schooling, doubly robust estimation
    JEL: C21 J24 I2
    Date: 2013–06
    URL: http://d.repec.org/n?u=RePEc:ihs:ihsesp:297&r=ecm
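    Illustration: a generic Python sketch of a doubly robust (AIPW-type) estimator of potential-outcome means under a multivalued treatment, combining a multinomial propensity model with per-level outcome regressions; the paper's implementation details may differ, and the linear/logistic models here are placeholders.
      import numpy as np
      from sklearn.linear_model import LogisticRegression, LinearRegression

      def aipw_potential_means(y, d, X):
          # d takes values in a finite set of treatment levels.
          levels = np.unique(d)
          ps = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)
          means = {}
          for j, t in enumerate(levels):
              mu = LinearRegression().fit(X[d == t], y[d == t]).predict(X)
              means[t] = np.mean((d == t) * (y - mu) / ps[:, j] + mu)
          return means  # e.g. means[2] - means[0] estimates an average treatment effect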
  17. By: Vanessa Berenguer Rico; Jesús Gonzalo
    Abstract: While co-integration theory is an ideal framework to study linear relationships among persistent economic time series, the intrinsic linearity in the concepts of integration and co-integration makes it unsuitable for studying non-linear long run relations among persistent processes. This drawback hinders the empirical analysis of modern macroeconomics, which often addresses asymmetric responses to policy interventions, multiplicity of equilibria, transitions between regimes or polynomial approximations to unknown functions. In this paper, to cope with non-linear relations and consequently to generalise co-integration, we formalise the idea of co-summability. It is built upon the concept of order of summability developed by Berenguer-Rico and Gonzalo (2013), which, in turn, was conceived to address non-linear transformations of persistent processes. Theoretically, a co-summable relationship is balanced, in terms of the variables involved having the same order of summability, and describes a long run equilibrium that can be non-linear, in the sense that the errors have a lower order of summability. To test for these types of equilibria, inference tools for balancedness and co-summability are designed and their asymptotic properties are analysed. Their finite sample performance is studied via Monte Carlo experiments. The practical strength of co-summability theory is shown through two empirical applications. Specifically, asymmetric preferences of central bankers and the environmental Kuznets curve hypothesis are studied through the lens of co-summability.
    Keywords: Balancedness, Co-integration, Co-summability, Non-linear co-integration, Non-linear processes, Persistence
    JEL: C01 C22
    Date: 2013–06
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:we1312&r=ecm
  18. By: Hrishikesh D. Vinod (Fordham University)
    Abstract: While the moving block bootstrap (MBB) has been used for mildly dependent (m-dependent) time series, the maximum entropy (ME) bootstrap (meboot) is perhaps the only tool for inference involving perfectly dependent, nonstationary time series, possibly subject to jumps, regime changes and gaps. This brief note describes the logic and provides the R code for two potential enhancements to the meboot algorithm in Vinod and Lopez-de-Lacalle (2009), available as the 'meboot' package of the R software. The first 'rescaling enhancement' adjusts the meboot resampled elements so that the population variance of the ME density equals that of the original data. Our second 'symmetrizing enhancement' forces the ME density to be symmetric. One simulation involving inference for regression standard errors suggests that the symmetrizing enhancement of the meboot continues to outperform the MBB.
    Keywords: Maximum entropy, block bootstrap, variance, symmetry, R-software
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:frd:wpaper:dp2013-04&r=ecm
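    Illustration: a generic Python sketch of the two ideas, applied to any bootstrap replicate of a series: rescale deviations from the mean so the replicate's variance matches that of the original data, and symmetrize the ensemble by randomly mirroring deviations. This is not the internal code of the 'meboot' package; it only mirrors the described enhancements in the simplest possible way.
      import numpy as np

      def rescale_to_variance(replicate, original):
          # Force the replicate's variance to equal the original data's variance.
          dev = replicate - replicate.mean()
          return replicate.mean() + dev * (original.std(ddof=1) / dev.std(ddof=1))

      def symmetrize(replicate, rng):
          # Flip all deviations with probability 1/2 so the ensemble density is symmetric.
          dev = replicate - replicate.mean()
          sign = rng.choice([-1.0, 1.0])
          return replicate.mean() + sign * dev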
  19. By: Bernd Heidergott (VU University Amsterdam); Warren Volk-Makarewicz (VU University Amsterdam)
    Abstract: Quantiles play an important role in modelling quality of service in the service industry and in modelling risk in the financial industry. Recently, Hong showed in his breakthrough papers that efficient simulation-based estimators can be obtained for quantile sensitivities by means of sample path differentiation. This has led to an intensive search for sample-path-differentiation-based estimators for quantile sensitivities. In this paper we present a novel approach to quantile sensitivity estimation. Our approach elaborates on the concept of measure-valued differentiation (MVD). Thereby, we overcome the main obstacle of the sample path approach, which is the requirement that the sample costs have to be Lipschitz continuous with respect to the parameter of interest. Specifically, we perform a sensitivity analysis of the quantile of the value of a multi-asset option and a portfolio. In addition, we discuss application of our sensitivity estimator to the Variance-Gamma process and to queueing networks.
    Keywords: quantile, sensitivity analysis, Monte-Carlo simulation, measure-valued differentiation, options, multi-asset option, Variance-Gamma process
    JEL: C44 C13 C63
    Date: 2013–06–29
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20130082&r=ecm
  20. By: Dariusz Grech; Grzegorz Pamuła
    Abstract: We study quantitatively the level of false multifractal signal one may encounter while analyzing multifractal phenomena in time series within multifractal detrended fluctuation analysis (MF-DFA). The investigated effect appears as a result of the finite length of the data series used and is additionally amplified by any long-term memory the data may contain. We provide a detailed quantitative description of this apparent multifractal background signal as a threshold in the spread of generalized Hurst exponent values $\Delta h$ or a threshold in the width of the multifractal spectrum $\Delta \alpha$ below which multifractal properties of the system are only apparent, i.e. do not exist, despite $\Delta\alpha\neq0$ or $\Delta h\neq 0$. We find this effect quite important for shorter or persistent series and we argue it is linear with respect to the autocorrelation exponent $\gamma$. Its strength decays according to a power law with respect to the length of the time series. The influence of basic linear and nonlinear transformations applied to initial data in finite time series with various levels of long memory is also investigated. This provides an additional set of semi-analytical results. The obtained formulas are significant in any interdisciplinary application of multifractality, including physics, financial data analysis or physiology, because they allow one to separate 'true' multifractal phenomena from apparent (artificial) multifractal effects. They should be a helpful first-choice tool for deciding whether, in a particular case, one is dealing with a signal with genuine multiscaling properties or not.
    Date: 2013–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1307.2014&r=ecm
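    Illustration: a compact Python MF-DFA sketch that computes the spread of generalized Hurst exponents for a purely monofractal series (Gaussian white noise), making the apparent finite-length width visible; the scale grid, q-grid (zero excluded for simplicity) and series length are arbitrary choices, not the paper's settings.
      import numpy as np

      def mfdfa_hq(x, qs, scales, order=1):
          y = np.cumsum(x - x.mean())                       # profile
          hq = []
          for q in qs:
              logF, logS = [], []
              for s in scales:
                  n_seg = len(y) // s
                  f2 = []
                  # Forward and backward segmentation, linear detrending per window.
                  for seg in (y[:n_seg * s].reshape(n_seg, s),
                              y[-n_seg * s:].reshape(n_seg, s)):
                      t = np.arange(s)
                      for w in seg:
                          trend = np.polyval(np.polyfit(t, w, order), t)
                          f2.append(np.mean((w - trend) ** 2))
                  f2 = np.asarray(f2)
                  Fq = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)   # q != 0 assumed
                  logF.append(np.log(Fq)); logS.append(np.log(s))
              hq.append(np.polyfit(logS, logF, 1)[0])          # slope = h(q)
          return np.asarray(hq)

      rng = np.random.default_rng(0)
      x = rng.standard_normal(2048)                 # monofractal: true h(q) = 0.5 for all q
      qs = np.array([-4.0, -2.0, -1.0, 1.0, 2.0, 4.0])
      scales = np.array([16, 32, 64, 128, 256])
      hq = mfdfa_hq(x, qs, scales)
      print("apparent width Delta h =", hq.max() - hq.min())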
  21. By: Blume, Lawrence E. (Department of Economics, Cornell University, Ithaca, USA, and Santa Fe Institute and IHS Vienna); Brock, William A. (Economics Department, University of Wisconsin-Madison, USA and University of Missouri, Columbia); Durlauf, Steven N. (Department of Economics, University of Wisconsin-Madison, USA); Jayaraman, Rajshri (European School of Management and Technology, Berlin, Germany)
    Abstract: This paper provides a systematic analysis of identification in linear social interactions models. This is both a theoretical and an econometric exercise as the analysis is linked to a rigorously delineated model of interdependent decisions. We develop an incomplete information game that describes individual choices in the presence of social interactions. The equilibrium strategy profiles are linear. Standard models in the empirical social interactions literature are shown to be exact or approximate special cases of our general framework, which in turn provides a basis for understanding the microeconomic foundations of those models. We consider identification of both endogenous (peer) and contextual social effects under alternative assumptions on a priori information about network structure available to an analyst, and contrast the informational content of individual-level and aggregated data. Finally, we discuss potential ramifications for identification of endogenous group selection and differences between the information sets of analysts and agents.
    Keywords: Social interactions, identification, incomplete information games
    JEL: C21 C23 C31 C35 C72 Z13
    Date: 2013–06
    URL: http://d.repec.org/n?u=RePEc:ihs:ihsesp:298&r=ecm
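    Illustration: a minimal Python simulation of a linear-in-means specification consistent with the class of models studied here, solving the simultaneous system y = alpha + beta*Gy + gamma*x + delta*Gx + eps for equilibrium outcomes on a random row-normalized network; the coefficients and network are illustrative and identification is not addressed.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      A = (rng.random((n, n)) < 0.05).astype(float)
      np.fill_diagonal(A, 0.0)
      G = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)     # row-normalized network

      alpha, beta, gamma, delta = 1.0, 0.4, 2.0, 0.5            # |beta| < 1: unique equilibrium
      x = rng.standard_normal(n)
      eps = rng.standard_normal(n)
      y = np.linalg.solve(np.eye(n) - beta * G,
                          alpha + gamma * x + delta * G @ x + eps)
      # Regressing y on (x, G @ x, G @ y) recovers the structural coefficients only
      # under the identification conditions discussed in the paper.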
  22. By: Melstrom, Richard; Lupi, Frank
    Abstract: This paper proposes using a control function to correct for endogeneity in recreation demand models. The control function approach is contrasted with the method of alternative specific constants (ASCs), which has been cautiously promoted in the literature. As an application, we consider the case of travel cost endogeneity in the demand for Great Lakes recreational fishing. Using data on Michigan anglers, we employ a random utility model of site choice. We show that either ASCs or the control function can correct for travel cost endogeneity, although we find that the model with ASCs produces significantly weaker results. Overall, compared with traditional approaches, control functions may offer a more flexible means of eliminating endogeneity in recreation demand models.
    Keywords: Recreation demand, random utility model, travel cost method, travel cost endogeneity, control function, alternative specific constants, recreational fishing
    JEL: Q22 Q25 Q26
    Date: 2012–08
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:48036&r=ecm
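    Illustration: a schematic two-stage control-function sketch in Python, simplified to a linear demand equation (the paper works with a random utility site-choice model): the residual from a first-stage regression of travel cost on instruments enters the demand equation as an extra regressor. All names are illustrative.
      import numpy as np
      import statsmodels.api as sm

      def control_function(trips, travel_cost, exog, instrument):
          # Stage 1: travel cost on the instrument(s) and exogenous covariates.
          Z = sm.add_constant(np.column_stack([instrument, exog]))
          v_hat = sm.OLS(travel_cost, Z).fit().resid
          # Stage 2: demand on travel cost, covariates and the control function v_hat.
          X = sm.add_constant(np.column_stack([travel_cost, exog, v_hat]))
          return sm.OLS(trips, X).fit()   # a significant v_hat coefficient signals endogeneity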
  23. By: Jorge E. Galán; Helena Veiga; Michael P. Wiper
    Abstract: Firms face a continuous process of technological and environmental change that requires making managerial decisions in a dynamic context. However, costs and other constraints prevent firms from making instant adjustments towards optimal conditions and may cause inefficiency to persist over time. In this work, we propose a flexible dynamic model that makes it possible to distinguish persistent effects in inefficiency from firm inefficiency heterogeneity and to capture differences in adjustment costs between firms. The new model is fitted to a ten-year sample of Colombian banks. Our findings suggest that firm characteristics associated with size and foreign ownership have negative effects on inefficiency, and that separating these heterogeneity factors from the dynamics of inefficiency improves model fit. On the other hand, acquisitions are found to have positive and persistent effects on inefficiency. Colombian banks present high inefficiency persistence, but there are important differences between institutions. In particular, merged banks present low adjustment costs that allow them to recover rapidly from the efficiency losses derived from merging processes.
    Keywords: Banks efficiency, Bayesian inference, Dynamic effects, Persistent shocks, Heterogeneity, Stochastic frontier models
    JEL: C11 C22 C23 C51 D24 G21
    Date: 2013–06
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws131918&r=ecm
  24. By: Jean-Charles Croix (SAF - Laboratoire de Sciences Actuarielle et Financière - Université Claude Bernard - Lyon I : EA2429); Frédéric Planchet (SAF - Laboratoire de Sciences Actuarielle et Financière - Université Claude Bernard - Lyon I : EA2429); Pierre-Emmanuel Thérond (SAF - Laboratoire de Sciences Actuarielle et Financière - Université Claude Bernard - Lyon I : EA2429)
    Abstract: The advent of Solvency 2 and the best-estimate methodology in the valuation of future cash flows lead insurers to focus particularly on their assumptions. In mortality, hypotheses are critical as insurers use best-estimate laws instead of standard mortality tables. Backtesting methods, i.e. ex-post model validation processes, are encouraged by regulators and raise increasing interest among practitioners and academics. In this paper, we propose a statistical approach (compatible with both parametric and non-parametric models) for backtesting mortality laws under model risk. Afterwards, we introduce a specification risk, supposing the mortality law is true on average but subject to random variations. Finally, the suitability of our method is assessed within this framework.
    Date: 2013–06–23
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-00839339&r=ecm
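    Illustration: a generic Python sketch of one standard backtesting check (not the paper's exact procedure): compare observed deaths with the deaths expected under the assumed mortality law, using a Poisson approximation for age-specific death counts; the degrees of freedom ignore estimated parameters.
      import numpy as np
      from scipy.stats import chi2

      def mortality_backtest(observed_deaths, exposure, q_assumed):
          expected = exposure * q_assumed
          z = (observed_deaths - expected) / np.sqrt(expected)   # standardized residuals
          stat = np.sum(z ** 2)                                  # global chi-square statistic
          p_value = chi2.sf(stat, df=len(observed_deaths))
          return z, stat, p_value   # large |z| at specific ages points to local misspecification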
  25. By: Bodenhorn, Howard (Clemson University); Guinnane, Timothy W. (Yale University); Mroz, Thomas A. (Clemson University)
    Abstract: An extensive literature uses anthropometric measures, typically heights, to draw inferences about living standards in the past. This literature's influence reaches beyond economic history; the results of historical heights research appear as crucial components in development economics and related fields. The historical heights literature often relies on micro-samples drawn from sub-populations that are themselves selected: examples include volunteer soldiers, prisoners, and runaway slaves, among others. Contributors to the heights literature sometimes acknowledge that their samples might not be random draws from the population cohorts in question, but rely on normality alone to correct for potential selection into the sample. We use a simple Roy model to show that selection cannot be resolved simply by augmenting truncated samples for left-tail shortfall. Statistical tests for departures from normality cannot detect selection in Monte Carlo exercises for small to moderate levels of self-selection, undermining a standard test for selection in the heights literature. We show strong evidence of selection using micro-data on the heights of British soldiers in the late eighteenth and nineteenth centuries. Consequently, widely accepted results in the literature may not reflect variations in living standards during a soldier's formative years; observed heights could be predominantly determined by the process determining selection into the sample. A survey of the current historical heights literature illustrates the problem for the three most common sources: military personnel, slaves, and prisoners.
    JEL: C46 C52 C81 I00 N30 O15 O47
    Date: 2013–05
    URL: http://d.repec.org/n?u=RePEc:ecl:yaleco:114&r=ecm
  26. By: Svend Rasmussen (Department of Food and Resource Economics, University of Copenhagen); Anders L. Madsen (HUGIN EXPERT A/S; Aalborg University); Mogens Lund (Department of Food and Resource Economics, University of Copenhagen)
    Abstract: The importance of risk management increases as farmers become more exposed to risk. But risk management is a difficult topic because income risk is the result of the complex interaction of multiple risk factors combined with the effect of an increasing array of possible risk management tools. In this paper we use Bayesian networks as an integrated modelling approach for representing uncertainty and analysing risk management in agriculture. It is shown how historical farm account data may be efficiently used to estimate conditional probabilities, which are the core elements in Bayesian network models. We further show how the Bayesian network model RiBay is used for stochastic simulation of farm income, and we demonstrate how RiBay can be used to simulate risk management at the farm level. It is concluded that the key strength of a Bayesian network is the transparency of assumptions, and that it has the ability to link uncertainty from different external sources to budget figures and to quantify risk at the farm level.
    Keywords: Bayesian network, Risk, Conditional probabilities, Stochastic simulation, Database, Farm account
    JEL: C11 C63 D81 Q12
    Date: 2013–05
    URL: http://d.repec.org/n?u=RePEc:foi:wpaper:2013_12&r=ecm
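    Illustration: a toy Python sketch in the spirit of the paper, with made-up probabilities and payoffs: a two-node chain (weather -> yield) plus an independent price node, with farm income simulated by forward sampling through the conditional probability tables. The RiBay model itself is far richer and its probabilities are estimated from farm account data.
      import numpy as np

      rng = np.random.default_rng(0)
      p_weather = {"good": 0.7, "bad": 0.3}
      p_yield_given_weather = {"good": {"high": 0.8, "low": 0.2},
                               "bad":  {"high": 0.3, "low": 0.7}}
      p_price = {"high": 0.4, "low": 0.6}
      yield_t = {"high": 9.0, "low": 5.0}     # tonnes/ha (illustrative)
      price_eur = {"high": 200.0, "low": 140.0}

      def sample_income(n=100000):
          incomes = np.empty(n)
          for i in range(n):
              w = rng.choice(list(p_weather), p=list(p_weather.values()))
              yl = rng.choice(list(p_yield_given_weather[w]),
                              p=list(p_yield_given_weather[w].values()))
              pr = rng.choice(list(p_price), p=list(p_price.values()))
              incomes[i] = yield_t[yl] * price_eur[pr]
          return incomes

      inc = sample_income()
      print("mean income:", inc.mean(), " 5% quantile (risk):", np.quantile(inc, 0.05))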

This nep-ecm issue is ©2013 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.