nep-ecm New Economics Papers
on Econometrics
Issue of 2007–05–12
twenty-six papers chosen by
Sune Karlsson
Orebro University

  1. Splines for Financial Volatility By Francesco Audrino; Peter Bühlmann
  2. An Assessment of Alternative State Space Models for Count Time Series By Ralph D. Snyder; Gael M. Martin; Phillip Gould; Paul D. Feigin
  3. A Closed-Form Asymptotic Variance-Covariance Matrix for the Maximum Likelihood Estimator of the GARCH(1,1) Model By Jun Ma
  4. A note on model selection in (time series) regression models - General-to-specific or specific-to-general? By Herwartz, Helmut
  5. Split-Sample Score Tests in Linear Instrumental Variables Regression By Saraswata Chaudhuri; Thomas Richardson; James Robins; Eric Zivot
  6. The Chi-square Approximation of the Restricted Likelihood Ratio Test for the Sum of Autoregressive Coefficients with Interval Estimation By Chen, Willa; Deo, Rohit
  7. Diagnostic Tests of Cross Section Independence for Nonlinear Panel Data Models By Cheng Hsiao; M. Hashem Pesaran; Andreas Pick
  8. College Education and Wages in the U.K.: Estimating Conditional Average Structural Functions in Nonadditive Models with Binary Endogenous Variables By Tobias J. Klein
  9. Simultaneous probability statements for Bayesian P-splines By Andreas Brezger; Stefan Lang
  10. Spurious Inference in the GARCH(1,1) Model When It Is Weakly Identified By Jun Ma; Charles Nelson; Richard Startz
  11. Functional Form and Heterogeneity in Models for Count Data By William Greene
  12. Cointegration testing in dependent panels with breaks By Di Iorio, Francesca; Fachin, Stefano
  13. The Zero-Information-Limit-Condition and Spurious Inference in Weakly Identified Models By Charles Nelson; Richard Startz
  14. True and Apparent Scaling: The Proximity of the Markov-Switching Multifractal Model to Long-Range Dependence By Liu, Ruipeng; Di Matteo, Tiziana; Lux, Thomas
  15. Multicointegration, polynomial cointegration and I(2) cointegration with structural breaks. An application to the sustainability of the US external deficit. By Vanessa Berenguer-Rico; Josep Lluís Carrion-i-Silvestre
  16. A Comparison of Univariate Stochastic Volatility Models for U.S. Short Rates Using EMM Estimation By Ying Gu; Eric Zivot
  17. Evaluating Structural Models for the U.S. Short Rate Using EMM and Particle Filters By Drew Creal; Ying Gu; Eric Zivot
  18. Weak Instruments: A Guide to the Literature By Adrian Pagan
  19. From Animal Baits to Investors’ Preference: Estimating and Demixing of the Weight Function in Semiparametric Models for Biased Samples By Ya'acov Ritov; Wolfgang Härdle
  20. A framework for cut-off sampling in business survey design By Marco Bee; Roberto Benedetti; Giuseppe Espa
  21. The Relationship between the Beveridge-Nelson Decomposition and Unobserved Component Models with Correlated Shocks By Kum Hwa Oh; Eric Zivot; Drew Creal
  22. Asymptotic results for a generalized Pólya urn and applications to clinical trials By Irene Crimaldi; Fabrizio Leisen
  23. Proxies for daily volatility By Robin G. de Vilder; Marcel P. Visser
  24. Implications of Two Measures of Persistence for Correlation Between Permanent and Transitory Shocks in U.S. Real GDP By Daisuke Nagakura; Eric Zivot
  25. Some Issues in Using Sign Restrictions for Identifying Structural VARs By Renee Fry; Adrian Pagan
  26. Effective global regularity and empirical modeling of direct, inverse and mixed demand systems By Keith R. McLaren; K.K. Gary Wong

  1. By: Francesco Audrino; Peter Bühlmann
    Abstract: We propose a flexible GARCH-type model for the prediction of volatility in financial time series. The approach relies on the idea of using multivariate B-splines of lagged observations and volatilities. Estimation of such a B-spline basis expansion is constructed within the likelihood framework for non-Gaussian observations. As the dimension of the B-spline basis is large, implying many parameters, we use regularized and sparse model fitting with a boosting algorithm. Our method is computationally attractive and feasible for large dimensions. We demonstrate its strong predictive potential for financial volatility on simulated and real data, also in comparison to other approaches, and we present some supporting asymptotic arguments.
    Keywords: Boosting, B-splines, Conditional variance, Financial time series, GARCH model, Volatility
    JEL: C13 C14 C22 C51 C53 C63
    Date: 2007–04
    URL: http://d.repec.org/n?u=RePEc:usg:dp2007:2007-11&r=ecm
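    To make the basis-expansion-plus-boosting idea concrete, here is a minimal Python sketch. It uses a truncated-power basis as a simple stand-in for the authors' B-splines, and componentwise L2-boosting on a crude log-variance proxy rather than their non-Gaussian likelihood; the variable names, knot choice, and proxy are illustrative assumptions, not the paper's algorithm.

        import numpy as np

        def spline_basis(x, knots, degree=2):
            # truncated-power basis: polynomial terms plus one hinge per knot
            cols = [x**d for d in range(1, degree + 1)]
            cols += [np.maximum(x - k, 0.0)**degree for k in knots]
            return np.column_stack(cols)

        def l2_boost(X, z, n_steps=200, nu=0.1):
            # componentwise L2-boosting: at each step, fit the single basis column
            # that best explains the current residuals, then take a small step nu
            fit = np.full(len(z), z.mean())
            coef = np.zeros(X.shape[1])
            for _ in range(n_steps):
                resid = z - fit
                b = X.T @ resid / (X**2).sum(axis=0)            # per-column LS coefficients
                sse = ((resid[:, None] - X * b)**2).sum(axis=0)
                j = sse.argmin()                                # greedy selection -> sparsity
                coef[j] += nu * b[j]
                fit += nu * b[j] * X[:, j]
            return coef, fit

        # illustrative use: regress z_t = log(r_t^2 + 1e-8) on a basis of lagged returns
        # X = spline_basis(r[:-1], knots=np.quantile(r, [0.1, 0.3, 0.5, 0.7, 0.9]))
        # coef, fit = l2_boost(X, np.log(r[1:]**2 + 1e-8))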
  2. By: Ralph D. Snyder; Gael M. Martin; Phillip Gould; Paul D. Feigin
    Abstract: This paper compares two alternative models for autocorrelated count time series. The first model can be viewed as a 'single source of error' discrete state space model, in which a time-varying parameter is specified as a function of lagged counts, with no additional source of error introduced. The second model is the more conventional 'dual source of error' discrete state space model, in which the time-varying parameter is driven by a random autocorrelated process. Using the nomenclature of the literature, the two representations can be viewed as observation-driven and parameter-driven respectively, with the distinction between the two models mimicking that between analogous models for other non-Gaussian data such as financial returns and trade durations. The paper demonstrates that when adopting a conditional Poisson specification, the two models have vastly different dispersion/correlation properties, with the dual source model having properties that are a much closer match to the empirical properties of observed count series than are those of the single source model. Simulation experiments are used to measure the finite sample performance of maximum likelihood (ML) estimators of the parameters of each model, and ML-based predictors, with ML estimation implemented for the dual source model via a deterministic hidden Markov chain approach. Most notably, the numerical results indicate that despite the very different properties of the two models, predictive accuracy is reasonably robust to misspecification of the state space form.
    Keywords: Discrete state-space model; single source of error model; hidden Markov chain
    JEL: C13 C22 C46 C53
    Date: 2007–05
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2007-4&r=ecm
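    For intuition, here is a minimal sketch of an observation-driven ("single source of error") conditional Poisson recursion, in which the intensity is a deterministic function of lagged counts. This is an INGARCH-style toy, not the authors' exact specification; the initialization, starting values, and bounds are assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def poisson_negll(params, y):
            omega, alpha, beta = params
            lam = np.empty(len(y))
            lam[0] = y.mean()                      # arbitrary but harmless initialization
            for t in range(1, len(y)):
                # intensity driven by lagged counts only: no second error source
                lam[t] = omega + alpha * y[t-1] + beta * lam[t-1]
            return -(y * np.log(lam) - lam).sum()  # Poisson log-likelihood up to a constant

        # ML estimation for a hypothetical count series y:
        # res = minimize(poisson_negll, x0=[1.0, 0.2, 0.5], args=(y,),
        #                method="L-BFGS-B", bounds=[(1e-6, None), (0.0, 1.0), (0.0, 1.0)])

    A parameter-driven ("dual source") model would instead let the intensity follow its own autocorrelated stochastic process, which is why its likelihood requires a filtering device such as the deterministic hidden Markov chain approach mentioned in the abstract.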
  3. By: Jun Ma
    Abstract: This paper presents a closed-form asymptotic variance-covariance matrix of the maximum likelihood estimator (MLE) for the GARCH(1,1) model. Starting from the standard asymptotic result, a closed-form expression for the information matrix of the MLE is derived via a local approximation. The closed-form variance-covariance matrix of the MLE for the GARCH(1,1) model is then obtained by inverting the information matrix. Monte Carlo simulation experiments show that this closed-form expression works well in the admissible region of parameters.
    Date: 2006–10
    URL: http://d.repec.org/n?u=RePEc:udb:wpaper:uwec-2006-11-r&r=ecm
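    The standard asymptotics the paper starts from can be mimicked numerically: maximize the Gaussian GARCH(1,1) likelihood and invert a Hessian approximation. A minimal sketch follows; the paper's closed-form expression replaces exactly this numerical inversion step, and the initialization and starting values here are assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def garch11_negll(params, r):
            omega, alpha, beta = params
            h = np.empty(len(r))
            h[0] = r.var()                                  # common initialization choice
            for t in range(1, len(r)):
                h[t] = omega + alpha * r[t-1]**2 + beta * h[t-1]
            return 0.5 * np.sum(np.log(h) + r**2 / h)       # negative Gaussian log-likelihood

        # for a hypothetical return series r:
        # res = minimize(garch11_negll, x0=[0.05, 0.05, 0.90], args=(r,), method="BFGS")
        # se = np.sqrt(np.diag(res.hess_inv))   # crude numerical stand-in for the closed form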
  4. By: Herwartz, Helmut
    Abstract: The paper provides Monte Carlo evidence on the performance of general-to-specific and specific-to-general selection of explanatory variables in linear (auto)regressions. In small samples the former is markedly inefficient in terms of ex-ante forecasting performance.
    Keywords: Model selection, specification testing, Lagrange multiplier tests
    JEL: C22 C51
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:zbw:cauewp:5537&r=ecm
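    A bare-bones version of general-to-specific selection: start from the full regression and repeatedly drop the least significant regressor. A numpy sketch under standard OLS assumptions (the critical value and stopping rule are illustrative choices, not those of the paper):

        import numpy as np

        def general_to_specific(X, y, t_crit=1.96):
            cols = list(range(X.shape[1]))
            while cols:
                Xc = X[:, cols]
                beta = np.linalg.lstsq(Xc, y, rcond=None)[0]
                resid = y - Xc @ beta
                sigma2 = resid @ resid / (len(y) - len(cols))
                se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xc.T @ Xc)))
                t = np.abs(beta / se)
                worst = int(t.argmin())
                if t[worst] >= t_crit:           # every survivor is significant: stop
                    break
                cols.pop(worst)                  # drop the least significant regressor
            return cols                          # indices of the retained regressors

    Specific-to-general selection would run the search in the opposite direction, adding the most promising omitted regressor at each step.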
  5. By: Saraswata Chaudhuri; Thomas Richardson; James Robins (Departments of Epidemiology and Biostatistics, Harvard University); Eric Zivot
    Abstract: In this paper we design two split-sample tests for subsets of structural coefficients in a linear Instrumental Variables (IV) regression. Sample splitting serves two purposes: 1) the validity of the resulting tests does not depend on the identifiability of the coefficients being tested, and 2) it combines information from two unrelated samples, one of which need not contain information on the dependent variable. The tests are performed on sub-sample one using the regression coefficients obtained from running the so-called first-stage regression on sub-sample two (the sample that need not contain information on the dependent variable). The first test uses the unbiased split-sample IV estimator of the remaining structural coefficients, constrained by the hypothesized value of the structural coefficients of interest [see Angrist and Krueger (1995)]. We call this the USSIV score test. The USSIV score test is asymptotically equivalent to the standard score test based on sub-sample one when the standard regularity conditions are satisfied. However, the USSIV score test can be oversized if the remaining structural coefficients are not identified. This motivates another test based on Robins (2004), which we call the Robins-test. The Robins-test is never oversized, and if the remaining structural coefficients are identified, the Robins-test is asymptotically equivalent to the USSIV score test against square-root-n local alternatives.
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:udb:wpaper:uwec-2007-10&r=ecm
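    The sample-splitting logic is easy to see in code: estimate the first stage on sub-sample two, then carry the fitted coefficients over to sub-sample one. Below is a sketch of the plain split-sample IV estimator in the spirit of Angrist and Krueger (1995); the unbiasedness correction (USSIV) and the score tests built on it involve further steps not shown, and exogenous covariates are omitted for brevity.

        import numpy as np

        def split_sample_iv(y1, Z1, X2, Z2):
            # first stage estimated on sub-sample two, which needs (X2, Z2) only
            Pi = np.linalg.lstsq(Z2, X2, rcond=None)[0]
            Xhat1 = Z1 @ Pi                  # fitted endogenous regressors, sub-sample one
            return np.linalg.lstsq(Xhat1, y1, rcond=None)[0]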
  6. By: Chen, Willa; Deo, Rohit
    Abstract: The restricted likelihood (RL) of an autoregressive (AR) process of order one with intercept/trend possesses enormous advantages, such as yielding estimates with significantly reduced bias, powerful unit root tests, small curvature, a well-behaved likelihood ratio test (RLRT) near the unit root and confidence intervals with good coverage. Here we consider the RLRT for the sum of the coefficients in AR(p) processes with intercept/trend. We show that the limit of the leading error term in the chi-square approximation to the RLRT distribution is finite as the unit root is approached, implying a uniformly good approximation over the entire parameter space and well-behaved interval inference for nearly integrated processes. We extend the correspondence between the stationary AR coefficients and the partial autocorrelations to the unit root case and provide a simple unified representation of the RL for both stationary and integrated AR processes which eliminates the singularity at the unit root. The resulting parameter space is shown to be the bounded p-dimensional hypercube (-1,1]×(-1,1)^{p-1}, thus simplifying the optimisation. Confidence intervals for the sum of the AR coefficients are easily obtained from the RLRT as they are equivalent to intervals for a simple bounded function of the partial autocorrelations. An empirical application to the Nelson-Plosser data is provided.
    Keywords: curvature; confidence interval; autoregressive; near unit root; Bartlett correction
    JEL: C10 C22 C12
    Date: 2007–04–23
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:3002&r=ecm
  7. By: Cheng Hsiao (University of Southern California); M. Hashem Pesaran (CIMF, Cambridge University, University of Southern California and IZA); Andreas Pick (CIMF, Cambridge University)
    Abstract: In this paper we discuss tests for residual cross section dependence in nonlinear panel data models. The tests are based on average pair-wise residual correlation coefficients. In nonlinear models, the definition of the residual is ambiguous, and we consider two approaches: deviations of the observed dependent variable from its expected value, and generalized residuals. We show the asymptotic consistency of the cross section dependence (CD) test of Pesaran (2004). In Monte Carlo experiments it emerges that the CD test has the correct size for any combination of N and T, whereas the LM test requires T to be large relative to N. We then analyze the roll-call votes of the 104th U.S. Congress and find considerable dependence between the votes of the members of Congress.
    Keywords: cross-section dependence, nonlinear panel data model
    JEL: C12 C33 C35
    Date: 2007–04
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp2756&r=ecm
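    The CD statistic itself is a one-liner once residuals are in hand: scale the average pairwise correlation and compare with N(0,1). A sketch for a balanced panel (how the residuals are defined, deviation-based or generalized, is precisely the paper's point of discussion):

        import numpy as np

        def cd_statistic(e):
            # e: T x N matrix of residuals (one column per cross-section unit)
            T, N = e.shape
            R = np.corrcoef(e, rowvar=False)              # N x N pairwise correlations
            rho_sum = R[np.triu_indices(N, k=1)].sum()    # sum over distinct pairs
            return np.sqrt(2.0 * T / (N * (N - 1))) * rho_sum   # approx N(0,1) under H0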
  8. By: Tobias J. Klein (University of Mannheim and IZA)
    Abstract: We propose and implement an estimator for identifiable features of correlated random coefficient models with binary endogenous variables and nonadditive errors in the outcome equation. It is suitable, e.g., for estimation of the average returns to college education when they are heterogeneous across individuals and correlated with the schooling choice. The estimated features are of central interest to economists and are directly linked to the marginal and average treatment effect in policy evaluation. The advantage of the approach that is taken in this paper is that it allows for non-trivial selection patterns. Identification relies on assumptions weaker than typical functional form and exclusion restrictions used in the context of classical instrumental variables analysis. In the empirical application, we relate wage levels, wage gains from a college degree and selection into college to unobserved ability. Our results yield a deepened understanding of individual heterogeneity which is relevant for the design of educational policy.
    Keywords: returns to college education, correlated random coefficient model, local instrumental variables, local linear regression
    JEL: C14 C31 J31
    Date: 2007–04
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp2761&r=ecm
  9. By: Andreas Brezger; Stefan Lang
    Abstract: P-splines are a popular approach for fitting nonlinear effects of continuous covariates in semiparametric regression models. Recently, a Bayesian version for P-splines has been developed on the basis of Markov chain Monte Carlo simulation techniques for inference. In this work we adopt and generalize the concept of Bayesian contour probabilities to additive models with Gaussian or multicategorical responses. More specifically, we aim at computing the maximum credible level (sometimes called the Bayesian p-value) for which a particular parameter vector of interest lies within the corresponding highest posterior density (HPD) region. We are particularly interested in parameter vectors that correspond to a constant, linear or, more generally, polynomial fit. As an alternative to HPD regions, simultaneous credible intervals can be used to define pseudo contour probabilities. Efficient algorithms for computing contour and pseudo contour probabilities are developed. The performance of the approach is assessed through simulation studies. Two applications on the determinants of undernutrition in developing countries and the health status of trees show how contour probabilities may be used in practice to assist the analyst in the model building process.
    Keywords: Bayesian p-values, contour probabilities, generalized additive models, Rao-Blackwell estimator
    Date: 2007–05
    URL: http://d.repec.org/n?u=RePEc:inn:wpaper:2007-08&r=ecm
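    One simple way to obtain pseudo contour probabilities from simultaneous credible bands, as mentioned in the abstract: standardize the MCMC draws and find the share of draws whose sup-deviation exceeds that of the candidate fit. This is a generic construction sketched under stated assumptions, not necessarily the authors' algorithm.

        import numpy as np

        def pseudo_contour_prob(samples, target):
            # samples: M x K posterior draws of a curve at K grid points
            # target:  length-K candidate fit (e.g. a constant or linear function)
            m, s = samples.mean(axis=0), samples.std(axis=0)
            z_draws = np.abs((samples - m) / s).max(axis=1)   # per-draw sup-deviation
            z_target = np.abs((target - m) / s).max()
            # largest credible level at which a simultaneous band still covers target
            return (z_draws >= z_target).mean()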
  10. By: Jun Ma; Charles Nelson; Richard Startz
    Abstract: This paper shows that the Zero-Information-Limit-Condition (ZILC) formulated by Nelson and Startz (2006) holds in the GARCH(1,1) model. As a result, the GARCH estimate tends to have too small a standard error relative to the true one when the ARCH parameter is small, even when sample size becomes very large. In combination with an upward bias in the GARCH estimate, the small standard error will often lead to the spurious inference that volatility is highly persistent when it is not. We develop an empirical strategy to deal with this issue and show how it applies to real datasets.
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:udb:wpaper:uwec-2006-14-p&r=ecm
  11. By: William Greene
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:ste:nystbu:07-10&r=ecm
  12. By: Di Iorio, Francesca; Fachin, Stefano
    Abstract: In this paper we propose panel cointegration tests allowing for breaks and cross-section dependence based on the Continuous-Path Block bootstrap. Simulation evidence shows that the proposed panel tests have satisfactory size and power properties, hence improving considerably on asymptotic tests applied to individual series. As an empirical illustration we examine investment and saving for a panel of European countries over the 1960-2002 period, finding, contrary to the results of most individual tests, that the hypothesis of a long-run relationship with breaks is compatible with the data.
    Keywords: Panel cointegration; continuous-path block bootstrap; breaks; Feldstein-Horioka Puzzle.
    JEL: C23
    Date: 2007–05–09
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:3139&r=ecm
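    The bootstrap ingredient can be illustrated with the standard moving-block scheme (the paper's Continuous-Path Block bootstrap is a refinement designed for integrated series and is not reproduced here). Applying the same resampled time indices to every unit preserves the cross-section dependence:

        import numpy as np

        def block_bootstrap_indices(T, block_len, rng=None):
            rng = rng or np.random.default_rng(0)
            n_blocks = int(np.ceil(T / block_len))
            starts = rng.integers(0, T - block_len + 1, size=n_blocks)
            idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:T]
            return idx   # use the same idx for all units to keep cross-section dependence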
  13. By: Charles Nelson; Richard Startz
    Date: 2006–05
    URL: http://d.repec.org/n?u=RePEc:udb:wpaper:uwec-2006-07&r=ecm
  14. By: Liu, Ruipeng; Di Matteo, Tiziana; Lux, Thomas
    Abstract: In this paper, we consider daily financial data of a collection of different stock market indices, exchange rates, and interest rates, and we analyze their multi-scaling properties by estimating a simple specification of the Markov-switching multifractal model (MSM). In order to see how well the estimated models capture the temporal dependence of the data, we estimate and compare the scaling exponents H(q) (for q = 1, 2) for both empirical data and simulated data of the estimated MSM models. In most cases the multifractal model appears to generate 'apparent' long memory in agreement with the empirical scaling laws.
    Keywords: scaling, generalized Hurst exponent, multifractal model, GMM estimation
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:zbw:cauewp:5534&r=ecm
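    The scaling exponents H(q) in the abstract are typically estimated by regressing log absolute-moment statistics on log time lags. A minimal sketch of a generalized Hurst exponent estimator (the lag range and q are illustrative choices):

        import numpy as np

        def hurst_exponent(x, q=2, max_tau=19):
            # x: log-price (or log-index) series; estimates H(q) from the scaling law
            # E|x_{t+tau} - x_t|^q ~ tau^(q*H(q))
            taus = np.arange(1, max_tau + 1)
            moments = [np.mean(np.abs(x[tau:] - x[:-tau])**q) for tau in taus]
            slope = np.polyfit(np.log(taus), np.log(moments), 1)[0]
            return slope / q    # H(q) = 0.5 for uncorrelated increments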
  15. By: Vanessa Berenguer-Rico (Faculty of Economics, Juan Carlos III.); Josep Lluís Carrion-i-Silvestre (Faculty of Economics, University of Barcelona)
    Abstract: In this paper we model the multicointegration relation, allowing for one structural break. Since multicointegration is a particular case of polynomial or I(2) cointegration, our proposal can also be applied in these cases. The paper proposes the use of a residual-based Dickey-Fuller class of statistic that accounts for one known or unknown structural break. Finite sample performance of the proposed statistic is investigated by using Monte Carlo simulations, which reveal that the statistic shows good properties in terms of empirical size and power. We complete the study with an empirical application to the sustainability of the US external deficit. Contrary to existing evidence, the consideration of one structural break leads us to conclude in favour of the sustainability of the US external deficit.
    Keywords: I(2) processes, multicointegration, polynomial cointegration, structural break, sustainability of external deficit.
    JEL: C12 C22
    Date: 2007–05
    URL: http://d.repec.org/n?u=RePEc:ira:wpaper:200709&r=ecm
  16. By: Ying Gu; Eric Zivot
    Abstract: In this paper, the efficient method of moments (EMM) estimation using a seminonparametric (SNP) auxiliary model is employed to determine the best fitting model for the volatility dynamics of the U.S. weekly three-month interest rate. A variety of volatility models are considered, including one-factor diffusion models, two-factor and three-factor stochastic volatility (SV) models, non-Gaussian diffusion models with Stable distributed errors, and a variety of Markov regime switching (RS) models. The advantage of using EMM estimation is that all of the proposed structural models can be evaluated with respect to a common auxiliary model. We find that a continuous-time two-factor SV model, a continuous-time three-factor SV model, and a discrete-time RS-in-volatility model with a level effect can well explain the salient features of the short rate as summarized by the auxiliary model. We also show that either an SV model with a level effect or an RS model with a level effect, but not both, is needed for explaining the data. Our EMM estimates of the level effect are much lower than unity, but around 1/2 after incorporating the SV effect or the RS effect.
    Date: 2006–08
    URL: http://d.repec.org/n?u=RePEc:udb:wpaper:uwec-2006-17&r=ecm
  17. By: Drew Creal; Ying Gu; Eric Zivot
    Abstract: We combine the efficient method of moments with appropriate algorithms from the optimal filtering literature to study a collection of models for the U.S. short rate. Our models include two continuous-time stochastic volatility models and two regime switching models, which provided the best fit in previous work that examined a large collection of models. The continuous-time stochastic volatility models fall into the class of nonlinear, non-Gaussian state space models for which we apply particle filtering and smoothing algorithms. Our results demonstrate the effectiveness of the particle filter for continuous-time processes. Our analysis also provides an alternative and complementary approach to the reprojection technique of Gallant and Tauchen (1998) for studying the dynamics of volatility.
    Date: 2006–08
    URL: http://d.repec.org/n?u=RePEc:udb:wpaper:uwec-2006-18&r=ecm
  18. By: Adrian Pagan
    Abstract: Weak instruments have become an issue in many contexts in which econometric methods have been used. Some progress has been made into how one diagnoses the problem and how one makes an allowance for it. The present paper gives a partial survey of this literature, focussing upon some of the major contributions and trying to provide a relatively simple exposition of the proposed solutions.
    Date: 2007–03–03
    URL: http://d.repec.org/n?u=RePEc:qut:auncer:2007-7&r=ecm
  19. By: Ya'acov Ritov; Wolfgang Härdle
    Abstract: We consider two semiparametric models for the weight function in a biased sample model. The object of our interest parametrizes the weight function, and it is either Euclidean or non-Euclidean. One of the models discussed in this paper is motivated by the estimation of the mixing distribution of individual utility functions in the DAX market.
    Keywords: Mixture distribution, Inverse problem, Risk aversion, Exponential mixture, Empirical pricing kernel, DAX, Market utility function.
    JEL: C10 C14 D01 D81
    Date: 2007–05
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2007-024&r=ecm
  20. By: Marco Bee; Roberto Benedetti; Giuseppe Espa
    Abstract: In sampling theory, the large concentration of the population with respect to most surveyed variables constitutes a problem which is difficult to tackle by means of classical tools. One possible solution is given by cut-off sampling, which explicitly prescribes discarding part of the population; in particular, if the population is composed of firms or establishments, the method results in the exclusion of the “smallest” firms. Whereas this sampling scheme is common among practitioners, its theoretical foundations tend to be considered weak, because the inclusion probability of some units is equal to zero. In this paper we propose a framework to justify cut-off sampling and to determine the census and cut-off thresholds. We use an estimation model which assumes as known the weight of the discarded units with respect to each variable; we compute the variance of the estimator and its bias, which is caused by violations of this assumption. We develop an algorithm which minimizes the MSE as a function of multivariate auxiliary information at the population level. Given the combinatorial optimization nature of the problem, we resort to the theory of stochastic relaxation: in particular, we use the simulated annealing algorithm.
    Keywords: Cut-off sampling, skewed populations, model-based estimation, optimal stratification, simulated annealing
    JEL: C21 D92 L60 O18 R12
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:trn:utwpde:0709&r=ecm
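    The combinatorial search over census and cut-off thresholds can be handled by a generic simulated annealing loop like the sketch below; the cost function standing in for the MSE, the neighborhood move, and the cooling schedule are all assumptions rather than the authors' implementation.

        import numpy as np

        def anneal(cost, x0, neighbor, n_iter=5000, t0=1.0, rng=None):
            rng = rng or np.random.default_rng(0)
            x, c = x0, cost(x0)
            best, best_c = x, c
            for i in range(n_iter):
                temp = t0 / np.log(i + 2)            # slow logarithmic cooling
                x_new = neighbor(x, rng)
                c_new = cost(x_new)
                # accept improvements; accept worse moves with decaying probability
                if c_new < c or rng.random() < np.exp(-(c_new - c) / temp):
                    x, c = x_new, c_new
                    if c < best_c:
                        best, best_c = x, c
            return best, best_c   # e.g. x = (cut-off threshold, census threshold)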
  21. By: Kum Hwa Oh; Eric Zivot; Drew Creal
    Abstract: Many researchers believe that the Beveridge-Nelson decomposition leads to permanent and transitory components whose shocks are perfectly negatively correlated. Indeed, some even consider it to be a property of the decomposition. We demonstrate that the Beveridge-Nelson decomposition does not provide definitive information about the correlation between permanent and transitory shocks in an unobserved components model. Given an ARIMA model describing the evolution of U.S. real GDP, we show that there are many state space representations that generate the Beveridge-Nelson decomposition. These include unobserved components models with perfectly correlated shocks and partially correlated shocks. In our applications, the only knowledge we have about the correlation is that it lies in a restricted interval that does not include zero. Although the filtered estimates of the trend and cycle are identical for models with different correlations, the observationally equivalent unobserved components models produce different smoothed estimates.
    Date: 2006–07
    URL: http://d.repec.org/n?u=RePEc:udb:wpaper:uwec-2006-16&r=ecm
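    For the simplest ARIMA case the Beveridge-Nelson permanent component has a closed form, which makes the paper's point easy to check numerically. A sketch assuming log GDP growth follows an AR(1) with mean mu and coefficient phi, both taken as given here:

        import numpy as np

        def bn_trend_ar1(y, phi, mu):
            # BN trend when dy_t - mu = phi*(dy_{t-1} - mu) + e_t:
            # tau_t = y_t + sum_{j>=1} E_t[dy_{t+j} - mu] = y_t + phi/(1-phi)*(dy_t - mu)
            dy = np.diff(y)
            return y[1:] + (phi / (1.0 - phi)) * (dy - mu)

    The BN cycle is y[1:] minus this trend; both components here are driven by the single ARIMA innovation, and the paper shows that unobserved components models with quite different shock correlations reproduce the same filtered estimates.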
  22. By: Irene Crimaldi (Department of Mathematics, University of Bologna, Italy); Fabrizio Leisen (Department of Economics, University of Insubria, Italy)
    Abstract: In this paper a new Pólya urn model is introduced and studied; in particular, a strong law of large numbers and two central limit theorems are proven. This urn generalizes a model studied in Berti et al. (2004), May et al. (2005) and Crimaldi (2007), and it has natural applications in clinical trials. Indeed, the model includes both delayed and missing (or null) responses. Moreover, a connection with the notion of conditional identity in distribution of Berti et al. (2004) is given.
    Date: 2007–04
    URL: http://d.repec.org/n?u=RePEc:ins:quaeco:qf0705&r=ecm
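    A toy simulation of the classical two-color Pólya urn shows the kind of limit behavior such theorems describe: the proportion of one color converges almost surely, but to a random limit. This is the textbook urn, not the generalized scheme with delayed or missing responses studied in the paper.

        import numpy as np

        def polya_urn_path(n_draws, white=1, black=1, add=1, seed=0):
            rng = np.random.default_rng(seed)
            path = np.empty(n_draws)
            for i in range(n_draws):
                if rng.random() < white / (white + black):
                    white += add                  # reinforce the color just drawn
                else:
                    black += add
                path[i] = white / (white + black)
            return path   # converges a.s.; the limit is Beta-distributed across runs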
  23. By: Robin G. de Vilder; Marcel P. Visser
    Abstract: High frequency data are often used to construct proxies for the daily volatility in discrete time volatility models. This paper introduces a calculus for such proxies, making it possible to compare and optimize them. The two distinguishing features of the approach are (1) a simple continuous time extension of discrete time volatility models and (2) an abstract definition of volatility proxy. The theory is applied to eighteen years' worth of S&P 500 index data. It is used to construct a proxy that outperforms realized volatility.
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:pse:psecon:2007-11&r=ecm
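    The benchmark the authors outperform, realized volatility, is the simplest high-frequency proxy: the square root of the sum of squared intraday returns. A sketch (the price grid and any scaling conventions are assumptions):

        import numpy as np

        def realized_variance(intraday_prices):
            r = np.diff(np.log(np.asarray(intraday_prices)))
            return np.sum(r**2)          # realized variance for one trading day

        def realized_volatility(intraday_prices):
            return np.sqrt(realized_variance(intraday_prices))

    Other proxies, such as absolute returns or the daily high-low range, can be compared on the same footing within the paper's framework.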
  24. By: Daisuke Nagakura; Eric Zivot
    Abstract: Conventionally, shocks to permanent and transitory components in the unobserved components (UC) model for the log of real GDP are assumed to be uncorrelated. This assumption is mainly for identification of model parameters. In this paper, we show important implications of two popular measures of persistence for the correlation between permanent and transitory shocks in the UC model, and demonstrate that the correlation is negative for the log of U.S. real GDP under a very general specification of the cycle process.
    Date: 2007–01
    URL: http://d.repec.org/n?u=RePEc:udb:wpaper:uwec-2007-07&r=ecm
  25. By: Renee Fry; Adrian Pagan
    Abstract: The paper looks at estimation of structural VARs with sign restrictions. Since sign restrictions do not generate a unique model, it is necessary to find some way of summarizing the information they yield. Existing methods present impulse responses from different models, and it is argued that they should come from a common model. If this is not done, the shocks implicit in the impulse responses will not be orthogonal. A method is described that tries to resolve this difficulty. It works with a common model whose impulse responses are as close as possible to the median values of the impulse responses (taken over the range of models satisfying the sign restrictions). Using a simple demand and supply model, it is shown that there is no reason to think that sign restrictions will generate better quantitative estimates of the effects of shocks than existing methods such as assuming a system is recursive.
    Date: 2007–04–13
    URL: http://d.repec.org/n?u=RePEc:qut:auncer:2007-8&r=ecm
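    The mechanics behind sign restrictions, and the single-model fix the paper proposes, can be sketched for a bivariate system: draw rotations of the Cholesky factor, keep those whose impact responses satisfy the signs, and then select the one draw closest to the pointwise medians. This toy handles impact responses only, whereas the paper works with full impulse-response paths; check_signs is a hypothetical user-supplied predicate.

        import numpy as np

        def median_target_impact(Sigma, check_signs, n_draws=1000, seed=0):
            rng = np.random.default_rng(seed)
            P = np.linalg.cholesky(Sigma)
            kept = []
            for _ in range(n_draws):
                th = rng.uniform(0.0, np.pi)
                Q = np.array([[np.cos(th), -np.sin(th)],
                              [np.sin(th),  np.cos(th)]])   # 2x2 rotation
                B = P @ Q                                   # candidate impact matrix
                if check_signs(B):
                    kept.append(B)
            kept = np.array(kept)          # assumes at least one draw satisfies the signs
            med = np.median(kept, axis=0)  # element-wise medians mix different models
            dist = ((kept - med)**2).sum(axis=(1, 2))
            return kept[dist.argmin()]     # one internally consistent model near the median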
  26. By: Keith R. McLaren; K.K. Gary Wong
    Abstract: In this paper, we utilize the notion of "effective global regularity" and the intuition stemming from the General Exponential Form of Cooper and McLaren (1996) to develop a family of "composite" (product and ratio) direct, inverse and mixed demand systems. Apart from having larger regularity regions, the resulting specifications are also of potentially arbitrary rank, which can better approximate non-linear Engel curves. We also make extensive use of duality theory and a numerical inversion estimation method to rectify the endogeneity problem encountered in the estimation of mixed demand systems. We illustrate the techniques by estimating different types of demand systems for Japanese quarterly meat and fish consumption. Results generally indicate that the proposed methods are promising and may prove beneficial for modeling systems of direct, inverse and mixed demand functions in the future.
    Keywords: Effective Global Regularity; Mixed Demands; Conditional Indirect Utility Functions; Numerical Inversion Estimation Method
    JEL: D11 D12
    Date: 2007–05
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2007-2&r=ecm

This nep-ecm issue is ©2007 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.