nep-ecm New Economics Papers
on Econometrics
Issue of 2012‒06‒25
34 papers chosen by
Sune Karlsson
Örebro University

  1. Robust inference on parameters via particle filters and sandwich covariance matrices By Arnaud Doucet; Neil Shephard
  2. Unit Root Vector Autoregression with Volatility Induced Stationarity By Anders Rahbek; Heino Bohn Nielsen
  3. Robust Standard Errors in Transformed Likelihood Estimation of Dynamic Panel Data Models By Hayakawa, Kazuhiko; Pesaran, M. Hashem
  4. Quantile regression for long memory testing: A case of realized volatility By Uwe Hassler; Paulo M.M. Rodrigues; Antonio Rubia
  5. Asymptotic Normality for Weighted Sums of Linear Processes By K.M. Abadir; W. Distaso; L. Giraitis; H.L. Koul
  6. Econometric analysis of multivariate realised QML: efficient positive semi-definite estimators of the covariation of equity prices By Neil Shephard; Dacheng Xiu
  7. Conjugate and Conditional Conjugate Bayesian Analysis of Discrete Graphical Models of Marginal Independence By Ioannis Ntzoufras; Claudia Tarantola
  8. Finite-Sample Properties of Estimators of the Dynamic Panel Data Model with Fixed Effects when N<T: A Monte Carlo Simulation Study By Kuikeu, Oscar
  9. Of Butterflies and Caterpillars: Bivariate Normality in the Sample Selection Model By Claudia PIGINI
  10. Nonparametric Estimation of Crop Yield Distributions: A Panel Data Approach By Wu, Ximing; Zhang, Yu Yvette
  11. Modeling financial time series with the skew slash distribution By Cristina G. de la Fuente; Pedro Galeano; Michael P. Wiper
  12. Factor-Based Forecasting in the Presence of Outliers: Are Factors Better Selected and Estimated by the Median than by The Mean? By Johannes Tang Kristensen
  13. On testing for non-linear and time irreversible probabilistic structure in high frequency ASX financial time series data By Phillip Wild; John Foster
  14. Implications of Missing Data Imputation for Agricultural Household Surveys: An Application to Technology Adoption By Gedikoglu, Haluk; Parcell, Joseph L.
  15. Robust volatility forecasts in the presence of structural breaks By Elena Andreou; Eric Ghysels; Constantinos Kourouyiannis
  16. Codependent VAR Models and the Pseudo-Structural Form By Trenkler, Carsten; Weber, Enzo
  17. Symmetric Price Transmission: A Copula Approach By Qiu, Feng
  18. Marginal Quantiles for Stationary Processes By Yves Dominicy; Siegfried Hörmann; Hiroaki Ogata; David Veredas
  19. Estimators of Binary Spatial Autoregressive Models: A Monte Carlo Study By Raffaella Calabrese; Johan A. Elkink
  20. Efficient computation of the cdf of the maximum distance between Brownian bridge and its concave majorant. By Karim, Filali; Balabdaoui, Fadoua
  21. Biases of Correlograms and of AR Representations of Stationary Series By Karim M. Abadir; Rolf Larsson
  22. Forecasting Value-at-Risk with Time-Varying Variance, Skewness and Kurtosis in an Exponential Weighted Moving Average Framework By A. Gabrielsen; P. Zagaglia; A. Kirchner; Z. Liu
  23. Comparing the Bias of Dynamic Panel Estimators in Multilevel Panels: Individual versus Grouped Data By Hendricks, Nathan P.; Smith, Aaron D.
  24. Estimating Gravity Equation Models in the Presence of Heteroskedasticity and Frequent Zeros By Xiong, Bo; Chen, Sixia
  25. Fixed Effects Maximum Likelihood Estimation of a Flexibly Parametric Proportional Hazard Model with an Application to Job Exits By Audrey Light; Yoshiaki Omori
  26. A New Approach to Investigate Market Integration: a Markov-Switching Autoregressive Model with Time-Varying Transition Probabilities By Zhao, Jieyuan; Goodwin, Barry K.; Pelletier, Denis
  27. The Dual of the Maximum Likelihood Method By Paris, Quirino
  28. Generating Tempered Stable Random Variates from Mixture Representation By Piotr Jelonek
  29. Can Value-Added Measures of Teacher Performance Be Trusted? By Guarino, Cassandra; Reckase, Mark D.; Wooldridge, Jeffrey M.
  30. Improving the performance of random coefficients demand models: the role of optimal instruments By Mathias REYNAERT; Frank VERBOVEN
  31. Time varying fractional cointegration By Simwaka, Kisu
  32. Basics of Levy processes By Ole E. Barndorff-Nielsen; Neil Shephard
  33. Econometric methods for financial crises. By Dumitrescu, Elena-Ivona
  34. Volatility forecast combinations using asymmetric loss functions By Elena Andreou; Constantinos Kourouyiannis; Andros Kourtellos

  1. By: Arnaud Doucet (Department of Statistics and Oxford-Man Institute, University of Oxford); Neil Shephard (Nuffield College, Dept of Economics and Oxford-Man Institute of Quantitative Finance, University of Oxford.)
    Abstract: Likelihood based estimation of the parameters of state space models can be carried out via a particle filter. In this paper we show how to make valid inference on such parameters when the model is incorrect. In particular we develop a simulation strategy for computing sandwich covariance matrices which can be used for asymptotic likelihood based inference. These methods are illustrated on some simulated data.
    Keywords: quasi-likelihood; particle filter; sandwich matrix; sequential Monte Carlo.
    JEL: C11 C15 C53 C58
    Date: 2012–06–01
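    The sandwich construction behind this paper can be illustrated outside the state-space setting. A minimal pure-Python sketch (a hypothetical scalar example, not the authors' particle-filter implementation): quasi-ML estimation of a mean under a deliberately misspecified unit-variance Gaussian working model, where the sandwich variance H^-1 J H^-1 collapses to the familiar heteroskedasticity-robust variance of the sample mean.

```python
def sandwich_variance_of_mean(x):
    """Sandwich variance for the quasi-ML mean estimate under a
    (possibly misspecified) unit-variance Gaussian working model.

    Per-observation quasi-log-likelihood: l_i = -(x_i - mu)^2 / 2.
    Score: s_i = x_i - mu; Hessian: h_i = -1.
    Sandwich: Var(mu_hat) = B / (n * A^2), with A = 1 (minus the
    average Hessian) and B = mean squared score at mu_hat.
    """
    n = len(x)
    mu_hat = sum(x) / n                          # quasi-ML estimate
    a = 1.0                                      # -average Hessian
    b = sum((xi - mu_hat) ** 2 for xi in x) / n  # average squared score
    return b / (n * a ** 2)

data = [1.2, -0.4, 2.5, 0.3, -1.1, 0.8]
print(sandwich_variance_of_mean(data))
```

In this toy case the sandwich equals the population-variance estimate of the data divided by n, which is exactly the robust variance of the sample mean.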
  2. By: Anders Rahbek (University of Copenhagen and CREATES); Heino Bohn Nielsen (University of Copenhagen)
    Abstract: We propose a discrete-time multivariate model where lagged levels of the process enter both the conditional mean and the conditional variance. This way we allow for the empirically observed persistence in time series such as interest rates, often implying unit roots, while at the same time maintaining stationarity despite such unit roots. Specifically, the model bridges vector autoregressions and multivariate ARCH models in which residuals are replaced by lagged levels. An empirical illustration using recent US term structure data is given in which the individual interest rates have unit roots and no finite first-order moments, but remain strictly stationary and ergodic, while they co-move in the sense that their spread has no unit root. The model thus allows for volatility-induced stationarity, and the paper shows conditions under which the multivariate process is strictly stationary and geometrically ergodic. Interestingly, these conditions include the case of unit roots and a reduced-rank structure in the conditional mean, known from linear co-integration to imply non-stationarity. Asymptotic theory of the maximum likelihood estimators for a particular structured case (so-called self-exciting) is provided, and it is shown that square-root-T convergence to Gaussian distributions applies despite unit roots as well as the absence of finite first and higher-order moments. Monte Carlo simulations confirm the usefulness of the asymptotics in finite samples.
    Keywords: Vector Autoregression, Unit-Root, Reduced Rank, Volatility Induced Stationarity, Term Structure, Double Autoregression
    JEL: C32
    Date: 2012–06–08
  3. By: Hayakawa, Kazuhiko (Hiroshima University); Pesaran, M. Hashem (University of Cambridge)
    Abstract: This paper extends the transformed maximum likelihood approach for estimation of dynamic panel data models by Hsiao, Pesaran, and Tahmiscioglu (2002) to the case where the errors are cross-sectionally heteroskedastic. This extension is not trivial due to the incidental parameters problem that arises, and its implications for estimation and inference. We approach the problem by working with a mis-specified homoskedastic model. It is shown that the transformed maximum likelihood estimator continues to be consistent even in the presence of cross-sectional heteroskedasticity. We also obtain standard errors that are robust to cross-sectional heteroskedasticity of unknown form. By means of Monte Carlo simulation, we investigate the finite sample behavior of the transformed maximum likelihood estimator and compare it with various GMM estimators proposed in the literature. Simulation results reveal that, in terms of median absolute errors and accuracy of inference, the transformed likelihood estimator outperforms the GMM estimators in almost all cases.
    Keywords: dynamic panels, cross-sectional heteroskedasticity, Monte Carlo simulation, GMM estimation
    JEL: C12 C13 C23
    Date: 2012–05
  4. By: Uwe Hassler; Paulo M.M. Rodrigues; Antonio Rubia
    Abstract: In this paper we derive a quantile regression approach to formally test for long memory in time series. We propose both individual and joint quantile tests which are useful to determine the order of integration along the different percentiles of the conditional distribution and, therefore, allow us to address more robustly the overall hypothesis of fractional integration. The null distributions of these tests obey standard laws (e.g., standard normal) and are free of nuisance parameters. The finite sample validity of the approach is established through Monte Carlo simulations, showing, for instance, large power gains over several alternative procedures under non-Gaussian errors. An empirical application of the testing procedure to different measures of daily realized volatility is presented. Our analysis reveals several interesting features, but the main finding is that a long-memory model with a constant order of integration around 0.4 cannot be rejected along the different percentiles of the distribution, which provides strong support for the existence of long memory in realized volatility from a completely new perspective.
    JEL: C12 C22
    Date: 2012
  5. By: K.M. Abadir (Imperial College London, UK); W. Distaso (Imperial College London, UK); L. Giraitis (Queen Mary, University of London, UK); H.L. Koul (Michigan State University, USA)
    Abstract: We establish asymptotic normality of weighted sums of stationary linear processes with general triangular array weights and when the innovations in the linear process are martingale differences. The results are obtained under minimal conditions on the weights and as long as the process of conditional variances of innovations is covariance stationary with lag k auto-covariances tending to zero, as k tends to infinity. We also obtain weak convergence of weighted partial sum processes. The results are applicable to linear processes that have short or long memory or exhibit seasonal long memory behavior. In particular they are applicable to GARCH and ARCH(∞) models. They are also useful in deriving asymptotic normality of kernel estimators of a nonparametric regression function when errors may have long memory.
    Keywords: Linear process, weighted sum, Lindeberg-Feller
    Date: 2012–06
  6. By: Neil Shephard (Nuffield College, Dept of Economics and Oxford-Man Institute of Quantitative Finance, University of Oxford.); Dacheng Xiu (University of Chicago Booth School of Business)
    Abstract: Estimating the covariance and correlation between assets using high frequency data is challenging due to market microstructure effects and Epps effects. In this paper we extend Xiu’s univariate QML approach to the multivariate case, carrying out inference as if the observations arise from an asynchronously observed vector scaled Brownian model observed with error. Under stochastic volatility the resulting QML estimator is positive semi-definite, uses all available data, is consistent and asymptotically mixed normal. The quasi-likelihood is computed using a Kalman filter and optimised using a relatively simple EM algorithm which scales well with the number of assets. We derive the theoretical properties of the estimator and prove that it achieves the efficient rate of convergence. We show how to make it achieve the non-parametric efficiency bound for this problem. The estimator is also analysed using Monte Carlo methods and applied on equity data that are distinct in their levels of liquidity.
    Keywords: EM algorithm; Kalman filter; market microstructure noise; non-synchronous data; portfolio optimisation; quadratic variation; quasi-likelihood; semimartingale; volatility.
    JEL: C01 C14 C58 D53 D81
    Date: 2012–04–23
  7. By: Ioannis Ntzoufras (Department of Statistics, Athens University of Economics and Business); Claudia Tarantola (Department of Economics and Business, University of Pavia)
    Abstract: We propose a conjugate and conditional conjugate Bayesian analysis of models of marginal independence with a bi-directed graph representation. We work with Markov equivalent directed acyclic graphs (DAGs) obtained using the same vertex set with the addition of some latent vertices when required. The DAG equivalent model is characterised by a minimal set of marginal and conditional probability parameters. This allows us to use compatible prior distributions based on products of Dirichlet distributions. For models with a DAG representation on the same vertex set, the posterior distribution and the marginal likelihood are analytically available, while for the remaining ones a data augmentation scheme introducing additional latent variables is required. For the latter, we estimate the marginal likelihood using Chib’s (1995) estimator. Additional implementation details, including identifiability of such models, are discussed. For all models, we also provide methodology for the computation of the posterior distributions of the marginal log-linear parameters based on a simple transformation of the simulated values of the probability parameters. We illustrate our method using a popular 4-way dataset.
    Keywords: Bi-directed graph, Chib’s marginal likelihood estimator, Contingency tables, Markov equivalent DAG, Monte Carlo computation.
    Date: 2012–06
  8. By: Kuikeu, Oscar
    Abstract: Using Monte Carlo experiments, we assess the finite-sample properties of estimators of the dynamic panel data model with fixed effects when N<T and N is very small. The results show that the within estimator is the best among all the estimators considered when T>=30.
    Keywords: Panel data, Monte Carlo experiments, finite sample properties
    JEL: O47 C15 C33
    Date: 2012–06–14
  9. By: Claudia PIGINI (Università Politecnica delle Marche, Dipartimento di Scienze Economiche e Sociali)
    Abstract: Since the seminal paper by Heckman (1974), the sample selection model has been an essential tool for applied economists and arguably the most sensitive to sources of misspecification among the standard microeconometric models involving limited dependent variables. The need for alternative methods to obtain consistent estimates has led to a number of estimation proposals for the sample selection model under non-normality. There is a marked dichotomy in the literature, which has developed in two conceptually different directions: the bivariate normality assumption can be either replaced, using copulae, or relaxed/removed, relying on semi- and nonparametric estimators. This paper surveys the more recent proposals on the estimation of the sample selection model that deal with distributional misspecification, giving the practitioner a unified framework of both parametric and semi-nonparametric options.
    Keywords: Sample selection model, bivariate normality, copulae, maximum likelihood, semiparametric methods
    JEL: C14 C18 C24 C46
    Date: 2012–06
  10. By: Wu, Ximing; Zhang, Yu Yvette
    Abstract: We propose a flexible nonparametric density estimator for panel data. One possible area of application is the estimation of crop yield distributions, whose data tend to be short panels from many geographical units. Taking into account the panel structure of the data is likely to improve the efficiency of the estimation when the crop distributions share some common features over time and cross-sectionally. We apply this method to estimate annual average crop yields of 99 Iowa counties. The results demonstrate the usefulness of the proposed method for simultaneously estimating densities from a large number of cross-sectional units.
    Keywords: crop yields, density estimation, panel data, Crop Production/Industries, Productivity Analysis, Research Methods/ Statistical Methods,
    Date: 2012
  11. By: Cristina G. de la Fuente; Pedro Galeano; Michael P. Wiper
    Abstract: Financial returns often present moderate skewness and high kurtosis. As a consequence, it is natural to look for a model that is flexible enough to capture these characteristics. The proposal is to undertake inference for a generalized autoregressive conditional heteroskedastic (GARCH) model, where the innovations are assumed to follow a skew slash distribution. Both classical and Bayesian inference are carried out. Simulations and a real data example illustrate the performance of the proposed methodology.
    Keywords: Financial returns, GARCH model, Kurtosis, Skew slash distribution, Skewness
    Date: 2012–06
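    For readers unfamiliar with the skew slash family, one common construction (an assumption here; the authors' parameterization may differ) draws an Azzalini-type skew-normal variate and divides it by a uniform raised to the power 1/q, producing heavier tails as q falls. A minimal simulation sketch:

```python
import math
import random

def skew_normal(delta, rng):
    """Azzalini-type skew-normal draw: delta*|W0| + sqrt(1-delta^2)*W1,
    with W0, W1 independent standard normals; delta in (-1, 1)."""
    w0 = rng.gauss(0.0, 1.0)
    w1 = rng.gauss(0.0, 1.0)
    return delta * abs(w0) + math.sqrt(1.0 - delta ** 2) * w1

def skew_slash(delta, q, rng):
    """Skew slash draw: skew-normal scaled by U**(-1/q), U ~ Uniform(0,1).
    Smaller q gives heavier tails; moments of order >= q do not exist."""
    return skew_normal(delta, rng) / rng.random() ** (1.0 / q)

rng = random.Random(42)
sample = [skew_slash(delta=0.8, q=5.0, rng=rng) for _ in range(10000)]
```

Such draws could serve as innovations in a GARCH recursion when simulating from the proposed model.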
  12. By: Johannes Tang Kristensen (Aarhus University and CREATES)
    Abstract: Macroeconomic forecasting using factor models estimated by principal components has become a popular research topic, with many theoretical and applied contributions in the literature. In this paper we attempt to address an often neglected issue in these models: the problem of outliers in the data. Most papers take an ad-hoc approach to this problem and simply screen datasets prior to estimation and remove anomalous observations. We investigate whether forecasting performance can be improved by using the original unscreened dataset and replacing principal components with a robust alternative. We propose an estimator based on least absolute deviations (LAD) as this alternative and establish a tractable method for computing the estimator. In addition to this we demonstrate the robustness features of the estimator through a number of Monte Carlo simulation studies. Finally, we apply our proposed estimator in a simulated real-time forecasting exercise to test its merits. We use a newly compiled dataset of US macroeconomic series spanning the period 1971:2–2011:4. Our findings suggest that the chosen treatment of outliers does affect forecasting performance and that in many cases improvements can be made using a robust estimator such as our proposed LAD estimator.
    Keywords: Forecasting, Factor Models, Principal Components Analysis, Robust Estimation, Least Absolute Deviations
    JEL: C38 C53 E37
    Date: 2012–06–08
  13. By: Phillip Wild (School of Economics, The University of Queensland); John Foster (School of Economics, The University of Queensland)
    Abstract: In this paper, we present three nonparametric trispectrum tests that can establish whether the spectral decomposition of kurtosis of high frequency financial asset price time series is consistent with the assumptions of Gaussianity, linearity and time reversibility. The detection of nonlinear and time irreversible probabilistic structure has important implications for the choice and implementation of a range of models of the evolution of asset prices, including the Black-Scholes-Merton (BSM) option pricing model, ARCH/GARCH and stochastic volatility models. We apply the tests to a selection of high frequency Australian (ASX) stocks.
    Date: 2012
  14. By: Gedikoglu, Haluk; Parcell, Joseph L.
    Abstract: Missing data is a problem that occurs frequently in survey data and results in biased estimates and reduced efficiency for regression estimates. The objective of the current study is to analyze the impact of missing-data imputation, using multiple-imputation methods, on regression estimates for agricultural household surveys. The current study also analyzes the impact of multiple imputation on regression results when all the variables in the regression have missing observations. Finally, the current study compares the impact of univariate multiple imputation with multivariate normal multiple imputation when some of the missing variables have discrete distributions. The results of the current study show that multivariate normal multiple imputation performs better than the univariate multiple imputation model, and overall both methods improve the efficiency of regression estimates.
    Keywords: Missing Data, Multiple Imputation, Bayesian Inference, Household Surveys, Research Methods/ Statistical Methods,
    Date: 2012–05–31
  15. By: Elena Andreou; Eric Ghysels; Constantinos Kourouyiannis
    Abstract: Financial time series often undergo periods of structural change that yield biased estimates or forecasts of volatility and thereby risk management measures. We show that in the context of GARCH diffusion models ignoring structural breaks in the leverage coefficient and the constant can lead to biased and inefficient AR-RV and GARCH-type volatility estimates. Similarly, we find that volatility forecasts based on AR-RV and GARCH-type models that take into account structural breaks by estimating the parameters only in the post-break period significantly outperform those that ignore them. Hence, we propose a Flexible Forecast Combination method that takes into account not only information from different volatility models, but from different subsamples as well. This method consists of two main steps: first, it splits the estimation period into subsamples based on estimated structural breaks detected by a change-point test. Second, it forecasts volatility weighting information from all subsamples by minimizing a particular loss function, such as the Square Error and QLIKE. An empirical application using the S&P 500 Index shows that our approach performs better, especially in periods of high volatility, compared to a large set of individual volatility models and simple averaging methods as well as Forecast Combinations under Regime Switching.
    Keywords: forecast, combinations, volatility, structural breaks
    Date: 2012–05
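    The combination step can be caricatured with two candidate forecast series and a single convex weight. A minimal sketch under the Square Error loss (variable names hypothetical; the paper combines many models and subsamples, and also considers QLIKE):

```python
def combine_forecasts(realized, fc_full, fc_postbreak, grid=101):
    """Choose the convex weight w on the post-break forecast that minimizes
    in-sample squared error against realized volatility.  A grid search is
    used for transparency; a closed form also exists for squared loss."""
    best_w, best_loss = 0.0, float("inf")
    for i in range(grid):
        w = i / (grid - 1)
        loss = sum((rv - (w * fb + (1.0 - w) * ff)) ** 2
                   for rv, ff, fb in zip(realized, fc_full, fc_postbreak))
        if loss < best_loss:
            best_w, best_loss = w, loss
    return best_w

rv = [1.0, 1.2, 0.9, 1.5, 1.1]       # realized volatility (toy numbers)
full = [0.8, 0.9, 0.8, 1.0, 0.9]     # forecasts from the full sample
post = [1.0, 1.2, 0.9, 1.5, 1.1]     # forecasts from the post-break sample
print(combine_forecasts(rv, full, post))
```

With the post-break forecasts matching realized volatility exactly, the search places all weight on them, illustrating why subsample information can dominate after a break.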
  16. By: Trenkler, Carsten; Weber, Enzo
    Abstract: This paper investigates whether codependence restrictions can be uniquely imposed on VAR and VEC models via the so-called pseudo-structural form used in the literature. Codependence of order q is given if a linear combination of autocorrelated variables eliminates the serial correlation after q lags. Importantly, maximum likelihood estimation and likelihood ratio testing are only possible if the codependence restrictions can be uniquely imposed. Applying the pseudo-structural form, our study reveals that this is not generally the case, but that unique imposition is guaranteed in several important special cases. Moreover, we discuss further issues, in particular upper bounds of the codependence order.
    Keywords: Codependence; VAR; cointegration; pseudo-structural form; serial correlation common features
    JEL: C32
    Date: 2012–06–11
  17. By: Qiu, Feng
    Abstract: In this paper, we introduce the copula approach to the empirical research of asymmetric price transmission. The proposed methodology serves as an appropriate improvement for investigating price co-movement and market integration, as it allows for flexible dependence structures among price adjustments/reactions along supply chain markets. In addition, we address the potential bias and inconsistency that result from ignoring the volatility of prices or price changes. In the empirical application, we exploit a state-dependent copula method, with generalized autoregressive conditional heteroskedasticity (GARCH) marginals, to construct bivariate dynamic copula-GARCH models for the U.S. hog supply chain. The method can simultaneously capture both volatility in univariate price changes and dynamic relationships among price movements.
    Keywords: Environmental Economics and Policy,
    Date: 2012–08–12
  18. By: Yves Dominicy; Siegfried Hörmann; Hiroaki Ogata; David Veredas
    Abstract: We establish the asymptotic normality of marginal sample quantiles for S–mixing vector stationary processes. S–mixing is a recently introduced and widely applicable notion of dependence. Results of some Monte Carlo simulations are given.
    Keywords: Quantiles; S-mixing
    Date: 2012–06
  19. By: Raffaella Calabrese (University of Milano-Bicocca); Johan A. Elkink (University College Dublin)
    Abstract: Most of the literature on spatial econometrics is primarily concerned with explaining continuous variables, while a variety of problems by their nature concern binary dependent variables. The goal of this paper is to provide a cohesive description and a critical comparison of the main estimators proposed in the literature for spatial binary choice models. The properties of such estimators are investigated using a theoretical and simulation study. To the authors’ knowledge, this is the first paper that provides a comprehensive Monte Carlo study of the estimators’ properties. This simulation study shows that the Gibbs estimator (LeSage 2000) performs best for low spatial autocorrelation, while the Recursive Importance Sampler (Beron and Vijverberg 2004) performs best for high spatial autocorrelation. The same results are obtained by increasing the sample size. Finally, the linearized General Method of Moments estimator (Klier and McMillen 2008) is the fastest algorithm that provides accurate estimates for low spatial autocorrelation and large sample size.
    Date: 2012–06–05
  20. By: Karim, Filali; Balabdaoui, Fadoua
    Abstract: In this paper, we describe two computational methods for calculating the cumulative distribution function and the upper quantiles of the maximal difference between a Brownian bridge and its concave majorant. The first method has two variants that are both based on a Monte Carlo approach, whereas the second uses the Gaver–Stehfest (GS) algorithm for the numerical inversion of the Laplace transform. While the former method is straightforward to implement, it is very much outperformed by the GS algorithm, which provides a very accurate approximation of the cumulative distribution as well as its upper quantiles. Our numerical work has a direct application in statistics: the maximal difference between a Brownian bridge and its concave majorant arises in connection with a nonparametric test for monotonicity of a density or regression curve on [0,1]. Our results can be used to construct very accurate rejection regions for this test at a given asymptotic level.
    Keywords: Monte Carlo; Monotonicity; Gaver–Stehfest algorithm; concave majorant; Brownian bridge;
    JEL: C15
    Date: 2012
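    The Gaver–Stehfest inversion the abstract relies on is compact to code. A minimal sketch of the generic algorithm (pure Python; not the authors' specific transform, which involves the concave-majorant distribution):

```python
import math

def gaver_stehfest(F, t, m=7):
    """Numerically invert a Laplace transform F(s) at t > 0 using the
    Gaver-Stehfest algorithm with 2*m terms.  Works well for smooth f;
    the alternating coefficients limit usable m in double precision."""
    n = 2 * m
    ln2_t = math.log(2.0) / t
    total = 0.0
    for k in range(1, n + 1):
        # Stehfest coefficient zeta_k
        zeta = 0.0
        for j in range((k + 1) // 2, min(k, m) + 1):
            zeta += (j ** m * math.factorial(2 * j)
                     / (math.factorial(m - j) * math.factorial(j)
                        * math.factorial(j - 1) * math.factorial(k - j)
                        * math.factorial(2 * j - k)))
        zeta *= (-1) ** (k + m)
        total += zeta * F(k * ln2_t)
    return ln2_t * total

# Sanity check: F(s) = 1/s^2 is the Laplace transform of f(t) = t
print(gaver_stehfest(lambda s: 1.0 / s ** 2, 3.0))
```

The sanity check should recover a value very close to 3, illustrating the accuracy the abstract reports for smooth targets.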
  21. By: Karim M. Abadir (Imperial College London, UK); Rolf Larsson (Uppsala University, Sweden)
    Abstract: We derive the relation between the biases of correlograms and of estimates of auto-regressive AR(k) representations of stationary series, and we illustrate it with a simple AR example. The new relation allows for k to vary with the sample size, which is a representation that can be used for most stationary processes. As a result, the biases of the estimators of such processes can now be quantified explicitly and in a unified way.
    Keywords: Auto-correlation function (ACF) and correlogram, autoregressive (AR) representation, least-squares bias
    Date: 2012–06
  22. By: A. Gabrielsen; P. Zagaglia; A. Kirchner; Z. Liu
    Abstract: This paper provides insight into the time-varying dynamics of the shape of the distribution of financial return series by proposing an exponentially weighted moving average model that jointly estimates volatility, skewness and kurtosis over time, using a modified form of the Gram-Charlier density in which skewness and kurtosis appear directly in the functional form of the density. In this setting, VaR can be described as a function of the time-varying higher moments by applying the Cornish-Fisher expansion series of the first four moments. An evaluation of the predictive performance of the proposed model in the estimation of 1-day and 10-day VaR forecasts is performed in comparison with the historical simulation, filtered historical simulation and GARCH models. The adequacy of the VaR forecasts is evaluated under the unconditional, independence and conditional likelihood ratio tests as well as Basel II regulatory tests. The results presented have significant implications for risk management, trading and hedging activities as well as for the pricing of equity derivatives.
    JEL: C51 C52 C53 G15
    Date: 2012–06
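    The Cornish-Fisher step the abstract describes maps the first four conditional moments into a quantile. A minimal sketch of that mapping (the standard fourth-order Cornish-Fisher expansion; the EWMA recursions that produce the moments are omitted):

```python
from statistics import NormalDist

def cornish_fisher_var(mu, sigma, skew, exkurt, alpha=0.01):
    """One-period Value-at-Risk from the Cornish-Fisher expansion of the
    alpha-quantile, given mean, volatility, skewness and EXCESS kurtosis."""
    z = NormalDist().inv_cdf(alpha)           # Gaussian alpha-quantile
    z_cf = (z
            + (z ** 2 - 1.0) * skew / 6.0
            + (z ** 3 - 3.0 * z) * exkurt / 24.0
            - (2.0 * z ** 3 - 5.0 * z) * skew ** 2 / 36.0)
    return -(mu + sigma * z_cf)               # VaR reported as a positive loss

# With zero skewness and excess kurtosis this reduces to Gaussian VaR
print(cornish_fisher_var(0.0, 0.02, 0.0, 0.0, alpha=0.05))
```

Negative skewness and positive excess kurtosis push the adjusted quantile further into the left tail, raising VaR relative to the Gaussian benchmark.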
  23. By: Hendricks, Nathan P.; Smith, Aaron D.
    Abstract: We propose the Grouped Coefficients estimator to reduce the bias of dynamic panel estimators when panels have a multilevel structure in the coefficient and factor-loading heterogeneity. If groups are chosen such that the within-group heterogeneity is small, then the grouped coefficients estimator can lead to a substantial bias reduction compared to the fixed effects and Arellano-Bond estimators. We also compare the magnitude of the bias of panel estimators with individual versus aggregate data and show that the magnitude of the bias also depends on the proportion of the heterogeneity that is within groups. In an application to estimating corn acreage response to price, we find that the grouped coefficients estimator gives reasonable results. Fixed effects and Arellano-Bond estimates of the coefficient on the lagged dependent variable appear to be severely biased with county-level data. In contrast, if we randomly assign the fields to groups and aggregate within the random groups, then pooled OLS of the randomly aggregated data gives a reasonable estimate of the coefficient on the lagged dependent variable.
    Keywords: Land Economics/Use, Production Economics, Research Methods/ Statistical Methods,
    Date: 2012
  24. By: Xiong, Bo; Chen, Sixia
    Abstract: Gravity equation models are widely used in international trade to assess the impact of various policies on the patterns of trade. Although recent literature provides solid micro-foundations for the gravity equation model, there is no consensus on how to estimate a gravity equation model in the presence of the two stylized features of trade data: frequent zeros and heteroskedasticity. We propose a Two-Step Nonlinear Least Square estimator that satisfactorily deals with both problems. Monte-Carlo experiments show that the proposed estimator strictly outperforms the Poisson Pseudo Maximum Likelihood (PPML), the Heckman sample selection model, and the E.T.-Tobit estimators, and that it weakly dominates the Truncated PPML model in the estimation of the intensive margin of trade. An empirical study of world trade in 1986 suggests that currency union and regional trade agreements facilitate trade primarily through improving market access, as opposed to intensifying pre-existing trade.
    Keywords: gravity equation, heteroskedasticity, zeros, nonlinear least square, intensive margin, extensive margin, market access, Two-Step Nonlinear Least Square, International Relations/Trade, Research Methods/ Statistical Methods, F1, Q1, C5,
    Date: 2012
  25. By: Audrey Light (Department of Economics, Ohio State University); Yoshiaki Omori (Faculty of Economics, Yokohama National University)
    Abstract: We extend the fixed effects maximum likelihood estimator to a proportional hazard model with a flexibly parametric baseline hazard. We use the method to estimate a job duration model for young men, and show that failure to account for unobserved fixed effects causes the negative schooling and union effects to be biased downward.
    Keywords: Proportional hazard model, fixed effects
    JEL: C41
    Date: 2012
  26. By: Zhao, Jieyuan; Goodwin, Barry K.; Pelletier, Denis
    Abstract: In this study, we develop a new approach to investigate spatial market integration. In particular, it is a Markov-Switching autoregressive (MSAR) model with time-varying state transition probabilities. Studying market integration is an effective way to test whether the law of one price holds across geographically separated markets, in other words, to test whether these markets perform efficiently or not. In this model, we assume that the parameters depend on a state variable which describes two unobservable states of markets – non-arbitrage and arbitrage – and is governed by a time-varying transition probability matrix. The main advantage of this model is that it allows transition probabilities to be time-varying. The probability of being in one state at time t depends on the previous state and the previous levels of market prices. An EM (Expectation-Maximization) algorithm is applied in the estimation of this model. For the empirical application, we examine market integration among four regional corn (Statesville, Candor, Cofield, Roaring River) and three regional soybean markets (Fayetteville, Cofield, and Creswell) in North Carolina. The prices of these markets are quoted daily from 3/1/2005 to 6/30/2010. Six pairwise spatial price relationships for the corn markets, and three pairwise spatial price relationships for the soybean markets are examined. Our results demonstrate that significant regime switching relationships characterize these markets. This has important implications for more conventional models of spatial price relationships and market integration. Our results are consistent with efficient arbitrage subject to transactions costs.
    Keywords: Research Methods/ Statistical Methods,
    Date: 2012
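    A minimal sketch of the time-varying transition probabilities described in the abstract above. The logistic link from the lagged price spread and all coefficient values are illustrative assumptions, not the authors' specification:

```python
import numpy as np

def transition_matrix(spread_lag, a=(-1.0, 1.5), b=(0.8, -0.5)):
    """2x2 time-varying transition matrix for a two-regime MSAR model.
    The probability of staying in each regime is a logistic function of the
    lagged price spread; regimes and coefficients here are illustrative."""
    p00 = 1.0 / (1.0 + np.exp(-(a[0] + b[0] * spread_lag)))  # stay in no-arbitrage
    p11 = 1.0 / (1.0 + np.exp(-(a[1] + b[1] * spread_lag)))  # stay in arbitrage
    return np.array([[p00, 1.0 - p00],
                     [1.0 - p11, p11]])

P = transition_matrix(spread_lag=2.0)
```

    Each row is a valid probability distribution, so the matrix can drive the regime chain inside an EM filter step.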
  27. By: Paris, Quirino
    Abstract: The Maximum Likelihood method estimates the parameter values of a statistical model that maximize the corresponding likelihood function, given the sample information. This is the primal approach that, in this paper, is presented as a mathematical programming specification whose solution requires the formulation of a Lagrange problem. A remarkable result of this setup is that the Lagrange multipliers associated with the linear statistical model (regarded as a set of constraints) are equal to the vector of residuals scaled by the variance of those residuals. The novel contribution of this paper consists in developing the dual model of the Maximum Likelihood method under normality assumptions. This model minimizes a function of the variance of the error terms subject to orthogonality conditions between the errors and the space of explanatory variables. An intuitive interpretation of the dual problem appeals to basic elements of information theory and establishes that the dual maximizes the net value of the sample information. This paper presents the dual ML model for a single regression and for a system of seemingly unrelated regressions.
    Keywords: Maximum likelihood, primal, dual, value of sample information, SUR, Demand and Price Analysis, Research Methods/ Statistical Methods, C1, C3,
    Date: 2012
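    A minimal numerical check of the two conditions the abstract describes, using OLS as the ML estimator under normality. The data-generating process is invented for illustration; the multiplier formula lam = e / sigma2 follows the paper's stated result:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.7, size=n)

# Under normality, the ML estimator of beta coincides with OLS
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta_hat
sigma2 = e @ e / n            # ML estimate of the error variance

# Multipliers on the constraints y = X beta + e, per the paper's result:
# residuals scaled by the error variance
lam = e / sigma2
```

    The dual's orthogonality condition (errors orthogonal to the column space of X) then holds by construction of the least-squares fit.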
  28. By: Piotr Jelonek
    Abstract: The paper presents a new method of random number generation for tempered stable distribution. This method is easy to implement, faster than other available approaches when tempering is moderate and more accurate than the benchmark. All the results are given as parametric formulas that may be directly used by practitioners.
    Keywords: heavy tails; random number generation; tempered stable distribution
    JEL: C15 C46 C63
    Date: 2012–06
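    The paper's generator for tempered stable laws is more involved than anything shown here; the sketch below only illustrates the exponential-tilting acceptance–rejection idea behind tempering, applied to a heavy-tailed toy distribution rather than a stable one:

```python
import numpy as np

def exp_tilt(raw, lam, seed=4):
    """Acceptance-rejection with an exponential tilt: keep a draw x from f with
    probability exp(-lam * (x - raw.min())), which yields draws from the
    tempered density proportional to f(x) * exp(-lam * x)."""
    u = np.random.default_rng(seed).uniform(size=raw.size)
    return raw[u <= np.exp(-lam * (raw - raw.min()))]

# Heavy-tailed toy input (Pareto-type); the tempered sample has thinner tails
raw = np.random.default_rng(5).uniform(size=20000) ** -0.8
tempered = exp_tilt(raw, lam=0.5)
```

    Large draws are rejected with high probability, so the tempered sample keeps the body of the distribution while damping the tail exponentially.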
  29. By: Guarino, Cassandra (Indiana University); Reckase, Mark D. (Michigan State University); Wooldridge, Jeffrey M. (Michigan State University)
    Abstract: We investigate whether commonly used value-added estimation strategies can produce accurate estimates of teacher effects. We estimate teacher effects in simulated student achievement data sets that mimic plausible types of student grouping and teacher assignment scenarios. No one method accurately captures true teacher effects in all scenarios, and the potential for misclassifying teachers as high- or low-performing can be substantial. Misspecifying dynamic relationships can exacerbate estimation problems. However, some estimators are more robust across scenarios and better suited to estimating teacher effects than others.
    Keywords: education, teacher labor markets, value-added, student achievement
    JEL: I20 J08 J24 J44 J45
    Date: 2012–05
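    A toy version of the simulation exercise described above, assuming random assignment of students to teachers and an invented data-generating process. The gain-score estimator shown deliberately misspecifies the dynamics (it forces a unit coefficient on the prior score), echoing the abstract's point about misspecified dynamic relationships:

```python
import numpy as np

rng = np.random.default_rng(1)
n_teachers, class_size = 20, 30
true_te = rng.normal(scale=0.2, size=n_teachers)        # true teacher effects
teacher = np.repeat(np.arange(n_teachers), class_size)  # random assignment
prior = rng.normal(size=teacher.size)                   # baseline achievement
score = 0.7 * prior + true_te[teacher] + rng.normal(scale=0.5, size=teacher.size)

# Gain-score estimator: mean achievement gain within each classroom.
# Misspecified (the true coefficient on prior is 0.7, not 1), but under
# random assignment it still tracks the true effects reasonably well.
gains = score - prior
est_te = np.array([gains[teacher == j].mean() for j in range(n_teachers)])
corr = np.corrcoef(est_te, true_te)[0, 1]
```

    Under non-random grouping or sorted teacher assignment, the same estimator's ranking of teachers would degrade, which is the kind of scenario dependence the paper documents.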
  30. By: Mathias REYNAERT; Frank VERBOVEN
    Abstract: We shed new light on the performance of Berry, Levinsohn and Pakes’ (1995) GMM estimator of the aggregate random coefficient logit model. Based on an extensive Monte Carlo study, we show that the use of Chamberlain’s (1987) optimal instruments overcomes most of the problems that have recently been documented with standard, non-optimal instruments. Optimal instruments reduce small sample bias, but prove even more powerful in increasing the estimator’s efficiency and stability. Other recent methodological advances (MPEC, polynomial-based integration of the market shares) greatly improve computational speed, but they are only successful in terms of bias and efficiency when combined with optimal instruments.
    Date: 2012–06
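    Chamberlain's optimal instruments in the full random coefficient logit model are beyond a short sketch, but the idea can be illustrated in a linear IV model, where the optimal instrument is the conditional mean E[x|z] and 2SLS implements it with a first-stage estimate. The data-generating process below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
z = rng.normal(size=(n, 2))            # exogenous instruments
pi = np.array([1.0, -0.5])
v = rng.normal(size=n)
x = z @ pi + v                         # endogenous regressor
u = 0.6 * v + rng.normal(size=n)       # error correlated with x
y = 2.0 * x + u

# Optimal instrument in this linear model: E[x|z] = z @ pi, replaced in
# practice by the first-stage fit (this is exactly 2SLS with one regressor)
pi_hat, *_ = np.linalg.lstsq(z, x, rcond=None)
x_hat = z @ pi_hat
beta_iv = (x_hat @ y) / (x_hat @ x)
```

    In the BLP setting the analogue is the conditional expectation of the Jacobian of the structural error, which is what the paper's Monte Carlo study evaluates.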
  31. By: Simwaka, Kisu
    Abstract: The concept of fractional cointegration was introduced by Engle and Granger (1987) to generalize traditional cointegration to the long-memory framework. In this paper, we extend the fractional cointegration model in Johansen (2008) and propose a time-varying framework, in which the fractional cointegrating relationship varies over time. In this case, the Johansen (2008) fractional cointegration setup is treated as a special case of our model.
    Keywords: Time varying Fractional cointegration
    JEL: C32 C10
    Date: 2012–06–10
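    The fractional frameworks above rest on the fractional difference operator (1 - L)^d. A small sketch of its binomial-expansion weights (the recursion is standard; the parameter values are illustrative):

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n coefficients of the fractional difference operator (1 - L)^d,
    via the binomial expansion: w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

w = frac_diff_weights(0.4, 5)
```

    For d = 1 the weights collapse to the ordinary first difference (1, -1, 0, ...), while for fractional d they decay hyperbolically, which is the long-memory signature.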
  32. By: Ole E. Barndorff-Nielsen; Neil Shephard
    Abstract: This is a draft chapter from a book by the authors on "Levy Driven Volatility Models".
    JEL: C22 C63 G10
    Date: 2012
  33. By: Dumitrescu, Elena-Ivona (Maastricht University)
    Date: 2012
  34. By: Elena Andreou; Constantinos Kourouyiannis; Andros Kourtellos
    Abstract: The paper deals with the problem of model uncertainty in forecasting volatility, using forecast combinations and a flexible family of asymmetric loss functions that allow for the possibility that an investor would attach different preferences to high vis-a-vis low volatility periods. Using daily as well as 5-minute data for US and major international stock market indices, we provide volatility forecasts by minimizing the Homogeneous Robust Loss function of the Realized Volatility and the combined forecast. Our findings show that forecast combinations based on the homogeneous robust loss function significantly outperform simple forecast combination methods, especially during the recent financial crisis.
    Keywords: asymmetric loss functions, forecast combinations, realized volatility, volatility forecasting
    Date: 2012–05
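    A toy sketch of loss-based forecast combination in the spirit of the abstract above, using QLIKE (one member of the homogeneous robust loss family) and a grid search over convex weights. The data and forecasts are simulated for illustration and do not reflect the paper's estimation scheme:

```python
import numpy as np

def qlike(rv, h):
    """QLIKE loss, a member of the homogeneous robust loss family."""
    r = rv / h
    return np.mean(r - np.log(r) - 1.0)

def combine(rv, f1, f2, grid=np.linspace(0.0, 1.0, 101)):
    """Pick the convex combination weight on f1 that minimizes QLIKE."""
    losses = [qlike(rv, w * f1 + (1 - w) * f2) for w in grid]
    return grid[int(np.argmin(losses))]

rng = np.random.default_rng(2)
rv = np.exp(rng.normal(size=500))                    # toy realized volatility
f1 = rv * np.exp(rng.normal(scale=0.1, size=500))    # accurate forecast
f2 = rv * np.exp(rng.normal(scale=0.8, size=500))    # noisy forecast
w = combine(rv, f1, f2)
```

    The chosen weight leans heavily toward the more accurate forecast, which is the mechanism by which loss-based combinations can beat equal-weighted ones.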

This nep-ecm issue is ©2012 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.