
on Econometrics 
By:  Breunig, Christoph 
Abstract:  This paper proposes several tests of restricted specification in nonparametric instrumental regression. Based on series estimators, test statistics are established that allow for tests of the general model against a parametric or nonparametric specification as well as a test of exogeneity of the vector of regressors. The tests are asymptotically normally distributed under correct specification and consistent against any alternative model. Under a sequence of local alternative hypotheses, the asymptotic distribution of the tests is derived. Moreover, uniform consistency is established over a class of alternatives whose distance to the null hypothesis shrinks appropriately as the sample size increases. 
JEL:  C12 C14 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:mnh:wpaper:32111&r=ecm 
By:  Gholamreza Hajargasht, William E. Griffiths, Joseph Brice, D.S. Prasada Rao, Duangkamon Chotikapanich 
Abstract:  We develop a general approach to estimation and inference for income distributions using grouped or aggregate data that are typically available in the form of population shares and class mean incomes, with unknown group bounds. Generic moment conditions and an optimal weight matrix that can be used for GMM estimation of any parametric income distribution are derived. Our derivation of the weight matrix and its inverse allows us to express the seemingly complex GMM objective function in a relatively simple form that facilitates estimation. We show that our proposed approach, which incorporates information on class means as well as population proportions, is more efficient than maximum likelihood estimation of the multinomial distribution that uses only population proportions. In contrast to the earlier work of Chotikapanich et al. (2007, 2012), which did not specify a formal GMM framework, did not provide methodology for obtaining standard errors, and restricted the analysis to the beta2 distribution, we provide standard errors for estimated parameters and relevant functions of them, such as inequality and poverty measures, and we provide methodology for all distributions. A test statistic for testing the adequacy of a distribution is proposed. Using eight countries/regions for the year 2005, we show how the methodology can be applied to estimate the parameters of the generalized beta distribution of the second kind, and its special-case distributions: the beta2, Singh-Maddala, Dagum, generalized gamma, and lognormal distributions. We test the adequacy of each distribution and compare predicted and actual income shares, where the number of groups used for prediction can differ from the number used in estimation. Estimates and standard errors for inequality and poverty measures are provided. 
Keywords:  GMM; Generalized beta distribution; Inequality and poverty. 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:mlb:wpaper:1140&r=ecm 
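A minimal sketch of the grouped-data idea, not the authors' optimal-weight GMM: match observed population shares to the model-implied class probabilities of a candidate income distribution. The class bounds, shares, and starting values below are hypothetical, the bounds are treated as known for simplicity, and an identity weight matrix replaces the paper's efficient one.

```python
import numpy as np
from scipy import optimize, stats

# Hypothetical grouped data: observed population shares in four income
# classes (the paper allows unknown bounds and derives an optimal weight
# matrix; both simplifications are made here for brevity)
bounds = np.array([0.0, 10.0, 20.0, 40.0, np.inf])
shares = np.array([0.25, 0.35, 0.25, 0.15])

def g(theta):
    """Moment conditions: model-implied class probabilities minus shares."""
    mu, log_sigma = theta
    cdf = stats.lognorm.cdf(bounds, s=np.exp(log_sigma), scale=np.exp(mu))
    return np.diff(cdf) - shares

# GMM with identity weighting: minimize g(theta)' g(theta)
res = optimize.minimize(lambda th: g(th) @ g(th), x0=[2.5, np.log(0.8)],
                        method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

With the efficient weight matrix of the paper, the objective would instead be g' W g, and standard errors follow from the usual GMM sandwich formula.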
By:  William E. Griffiths and Gholamreza Hajargasht 
Abstract:  We show how the generalized method of moments (GMM) framework developed in Hajargasht et al. (2012) for estimating income distributions from grouped data can be adapted for estimating mixtures. This approach can be used to estimate a mixture of any distributions where the moments and moment distribution functions of the mixture components can be expressed in terms of the parameters of those components. The required expressions for mixtures of lognormal and gamma densities are provided; in our empirical work we focus on estimation of mixtures of lognormal distributions. Two- and three-component lognormal mixtures are estimated for the income distributions of China rural, China urban, India rural, India urban, Pakistan, Russia, South Africa, Brazil and Indonesia. Their performance, in terms of goodness-of-fit and validity of moment conditions, is compared with that of a generalized beta (GB2) distribution. We find that the three-component lognormal mixture always outperforms the GB2 distribution, but the two-component mixture does not. For Brazil and Indonesia we have single observations, making it possible to compare maximum likelihood estimation of the mixtures from a complete set of single observations with GMM estimates obtained after grouping the data. Estimates from both procedures are found to be comparable, lending support to the usefulness of the GMM approach. 
Keywords:  Lognormal distribution, Generalized beta distribution, Inequality measures 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:mlb:wpaper:1148&r=ecm 
By:  Meenagh, David; Minford, Patrick; Wickens, Michael R. 
Abstract:  We extend the method of indirect inference testing to data that are not filtered and so may be nonstationary. We apply the method to an open economy real business cycle model on UK data. We review the method using a Monte Carlo experiment and find that it performs accurately and has good power. 
Keywords:  Bootstrap; DSGE; indirect inference; Monte Carlo; VECM 
JEL:  C12 C32 C52 E1 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:9058&r=ecm 
By:  Oliver Grothe (Department of Economic and Social Statistics, University of Cologne); Volodymyr Korniichuk (CGS, University of Cologne); Hans Manner (Department of Economic and Social Statistics, University of Cologne) 
Abstract:  We propose a new model that can capture the typical features of multivariate extreme events observed in financial time series, namely clustering behavior in magnitudes and arrival times of multivariate extreme events, and time-varying dependence. The model is developed in the framework of the peaks-over-threshold approach in extreme value theory and relies on a Poisson process with self-exciting intensity. We discuss the properties of the model, treat its estimation, deal with testing goodness-of-fit, and develop a simulation algorithm. The model is applied to return data of two stock markets and four major European banks. 
Keywords:  Time Series, Peaks Over Threshold, Hawkes Processes, Extreme Value Theory 
JEL:  C32 C51 C58 G15 
Date:  2012–06–27 
URL:  http://d.repec.org/n?u=RePEc:cgr:cgsser:0306&r=ecm 
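The self-exciting intensity at the heart of this model can be illustrated with a univariate Hawkes process simulated by Ogata's thinning algorithm; the parameter values below are hypothetical, and the paper's actual model is a multivariate peaks-over-threshold construction.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_hawkes(mu, alpha, beta, T):
    """Ogata thinning for a Hawkes process with intensity
    lambda(t) = mu + alpha * sum_i exp(-beta * (t - t_i))."""
    events, t = [], 0.0
    while t < T:
        ev = np.array(events)
        # intensity just after t bounds the (decaying) intensity ahead
        lam_bar = mu + alpha * np.exp(-beta * (t - ev)).sum()
        t += rng.exponential(1.0 / lam_bar)          # candidate point
        lam_t = mu + alpha * np.exp(-beta * (t - ev)).sum()
        if t < T and rng.uniform() <= lam_t / lam_bar:
            events.append(t)                         # accept candidate
    return np.array(events)

# branching ratio alpha/beta = 0.4 < 1 keeps the process stationary
times = simulate_hawkes(mu=0.5, alpha=0.8, beta=2.0, T=200.0)
```

Each accepted event raises the intensity by alpha, producing the clustered arrival times that motivate the paper's approach.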
By:  Le, Vo Phuong Mai; Meenagh, David; Minford, Patrick; Wickens, Michael R. 
Abstract:  Using Monte Carlo experiments, we examine the performance of Indirect Inference tests of DSGE models, usually versions of the Smets-Wouters New Keynesian model of the US postwar period. We compare these with tests based on direct inference (using the Likelihood Ratio), and on the Del Negro-Schorfheide DSGE-VAR weight. We find that the power of all three tests is substantial, so that a false model will tend to be rejected by all three; but the power of the indirect inference tests is by far the greatest, necessitating re-estimation by indirect inference to ensure that the model is tested in its fullest sense. 
Keywords:  Bootstrap; DSGE; DSGE-VAR weight; indirect inference; likelihood ratio; New Classical; New Keynesian; Wald statistic 
JEL:  C12 C32 C52 E1 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:9056&r=ecm 
By:  Pierre Perron (Boston University); Gabriel Rodriguez (Departamento de Economía, Pontificia Universidad Católica del Perú) 
Abstract:  We analyze different residual-based tests for the null of no cointegration using GLS detrended data. We find and simulate the limiting distributions of these statistics when GLS demeaned and GLS detrended data are used. The distributions depend on the number of right-hand-side variables, the type of deterministic components used in the cointegration equation, and a nuisance parameter R2 which measures the long-run correlation between xt and yt. We present an extensive number of figures which show the asymptotic power functions of the different statistics analyzed in this paper. The results show that GLS detrending yields greater asymptotic power than OLS detrending. The simpler residual-based tests (such as the ADF) show power gains for small values of R2 and for only one right-hand-side variable; this evidence is valid for R2 less than 0.4. The figures show that when R2 is larger, the ECR statistics are better for any number of right-hand-side variables. In particular, the evidence shows that the ECR statistic which assumes a known cointegration vector is the most powerful. A set of simulated asymptotic critical values is also presented. Unlike other references, in the present framework we use different values of c for different numbers of right-hand-side variables (the xt variables) and according to the set of deterministic components. In this selection, we use R2 = 0.4, which appears to be a sensible choice. 
Keywords:  Cointegration, Residual-Based Unit Root Test, ECR Test, OLS and GLS Detrended Data, Hypothesis Testing 
JEL:  C2 C3 C5 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:pcp:pucwps:wp00327&r=ecm 
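A bare-bones residual-based statistic of the kind analyzed here can be sketched as follows; this version uses OLS demeaning and a lag-free Dickey-Fuller regression on simulated data, whereas the paper works with GLS detrending and supplies the appropriate (non-standard) critical values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = np.cumsum(rng.standard_normal(n))        # I(1) regressor
y = 1.0 + 0.5 * x + rng.standard_normal(n)   # cointegrated by construction

# Step 1: residuals of the static cointegrating regression
# (OLS demeaned; the paper uses GLS demeaning/detrending instead)
X = np.column_stack([np.ones(n), x])
u = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

# Step 2: Dickey-Fuller regression on the residuals, no lags for brevity
du, u1 = np.diff(u), u[:-1]
rho = (u1 @ du) / (u1 @ u1)
s2 = ((du - rho * u1) ** 2).sum() / (len(du) - 1)
t_stat = rho / np.sqrt(s2 / (u1 @ u1))
# compare t_stat with residual-based cointegration critical values, which
# depend on the number of regressors and the deterministic components
```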
By:  Oliver Grothe 
Abstract:  Many nonlinear extensions of the Kalman filter, e.g., the extended and the unscented Kalman filter, reduce the state densities to Gaussian densities. This approximation yields adequate results in many cases. However, these filters only estimate states that are correlated with the observations. Therefore, sequential estimation of diffusion parameters, e.g., volatility, which are not correlated with the observations, is not possible. While other filters overcome this problem with simulations, we extend the measurement update of the Gaussian two-moment filters by a higher-order correlation measurement update. We explicitly state formulas for a higher-order unscented Kalman filter within a continuous-discrete state space. We demonstrate the filter in the context of parameter estimation of an Ornstein-Uhlenbeck process. 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1207.4300&r=ecm 
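The building block the paper extends is the unscented transform, which propagates a mean and covariance through a nonlinearity via deterministic sigma points. A minimal sketch with the standard 2n+1 points (the paper adds higher-order correlation terms to the measurement update, which this sketch omits):

```python
import numpy as np

def unscented_transform(f, mean, cov, kappa=1.0):
    """Propagate (mean, cov) through a nonlinearity f using the
    standard 2n+1 sigma points of the unscented transform."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)     # L @ L.T = (n+kappa)*cov
    sigma = np.vstack([mean, mean + L.T, mean - L.T])
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigma])
    m = w @ y                                     # transformed mean
    d = y - m
    return m, (w[:, None] * d).T @ d              # transformed covariance
```

For a linear map the transform is exact, which gives a quick sanity check; for nonlinear maps it matches mean and covariance to second order.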
By:  Mitia Duerinckx; Christophe Ley; Yves-Caoimhin Swan 
Abstract:  Gauss’ principle states that the maximum likelihood estimator of the parameter in a location family is the sample mean for all samples of all sample sizes if and only if the family is Gaussian. There exist many extensions of this result in diverse directions. In this paper we propose a unified treatment of this literature. In doing so we define the fundamental concept of minimal necessary sample size at which a given characterization holds. Many of the cornerstone references on this topic are retrieved and discussed in light of our findings, and several new characterization theorems are provided. 
Keywords:  location parameter; maximum likelihood estimator; minimal necessary sample size; MLE characterization; scale parameter; score function 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/122336&r=ecm 
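Gauss' principle can be illustrated numerically: under a Gaussian location family the MLE coincides with the sample mean, while under, say, a Laplace family it is the sample median instead. The simulated data below are hypothetical.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(7)
x = rng.normal(3.0, 1.0, size=200)

# Gaussian location family: the location MLE is the sample mean
nll_gauss = lambda th: -stats.norm.logpdf(x - th).sum()
mle_gauss = optimize.minimize_scalar(nll_gauss, bounds=(-10, 10),
                                     method="bounded").x

# Laplace location family: the location MLE is the sample median
nll_laplace = lambda th: -stats.laplace.logpdf(x - th).sum()
mle_laplace = optimize.minimize_scalar(nll_laplace, bounds=(-10, 10),
                                       method="bounded").x
```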
By:  Tomasz Wozniak 
Abstract:  Spillover and contagion effects have gained significant interest during the recent years of financial crisis. Attention has been directed not only to relations between returns of financial variables, but also to spillovers in risk. I use the family of Constant Conditional Correlation GARCH models to model the risk associated with financial time series and to make inferences about Granger causal relations between second conditional moments. The restrictions for second-order Granger noncausality between two vectors of variables are derived. To assess the credibility of the noncausality hypotheses, I employ Bayes factors. Bayesian testing procedures have not yet been applied to the problem of testing Granger noncausality. Contrary to classical tests, Bayes factors make such testing possible regardless of the form of the restrictions on the parameters of the model. Moreover, they relax the assumptions about the existence of higher-order moments of the processes required in classical tests. 
Keywords:  SecondOrder Causality; Volatility Spillovers; Bayes Factors; GARCH Models 
JEL:  C11 C12 C32 C53 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:mlb:wpaper:1139&r=ecm 
By:  Taras Bodnar; Nikolaus Hautsch 
Abstract:  We introduce a copula-based dynamic model for multivariate processes of (nonnegative) high-frequency trading variables revealing time-varying conditional variances and correlations. Modeling the variables’ conditional mean processes using a multiplicative error model, we map the resulting residuals into a Gaussian domain using a Gaussian copula. Based on high-frequency volatility, cumulative trading volumes, trade counts and market depth of various stocks traded at the NYSE, we show that the proposed copula-based transformation is supported by the data and allows disentangling (multivariate) dynamics in higher-order moments. To capture the latter, we propose a DCC-GARCH specification. We suggest estimating the model by composite maximum likelihood, which is sufficiently flexible to be applicable in high dimensions. Strong empirical evidence for time-varying conditional (co)variances in trading processes supports the usefulness of the approach. Taking these higher-order dynamics explicitly into account significantly improves the goodness-of-fit of the multiplicative error model and allows capturing time-varying liquidity risks. 
Keywords:  multiplicative error model, trading processes, copula, DCC-GARCH, liquidity risk 
JEL:  C32 C58 C46 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2012044&r=ecm 
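The mapping into the Gaussian domain can be sketched with an empirical probability integral transform per margin; the gamma-distributed "residuals" below are hypothetical stand-ins for estimated multiplicative errors, and the paper uses fitted parametric margins rather than ranks.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical residuals of a multiplicative error model: positive-valued
eps = rng.gamma(shape=4.0, scale=0.25, size=(1000, 2))

def to_gaussian(u):
    """Empirical probability integral transform into the Gaussian domain."""
    ranks = stats.rankdata(u) / (len(u) + 1)    # strictly inside (0, 1)
    return stats.norm.ppf(ranks)

z = np.column_stack([to_gaussian(eps[:, j]) for j in range(eps.shape[1])])
# z has (approximately) standard normal margins; its joint dynamics can
# then be modeled with a Gaussian copula, e.g. a DCC-GARCH specification
```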
By:  Jan Heufer 
Abstract:  This paper provides an efficient way to generate a set of random choices on a set of budgets which satisfy the Generalised Axiom of Revealed Preference (GARP), that is, they are consistent with utility maximisation. The choices are drawn from an approximate uniform distribution on the admissible region on each budget which ensures consistency with GARP, based on a Markovian Monte Carlo algorithm due to Smith (1984). This procedure extends Bronars' (1987) method, as it can be used to approximate the power of tests for conditions for which GARP is a necessary but not sufficient condition (e.g., homotheticity, separability, risk aversion). For example, it allows one to approximate the probability that a set of random choices which happens to satisfy GARP is also consistent with homotheticity. The approach can also be applied to production analysis and nonparametric tests of cost minimisation. 
Keywords:  Cost minimisation; GARP; hypothesis testing; Monte Carlo methods; nonparametric tests; test power; random optimal choices; revealed preference; utility maximisation 
JEL:  C14 C63 D11 D12 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:rwi:repape:0354&r=ecm 
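The GARP condition underlying this procedure can be checked directly: build the direct revealed-preference relation from expenditures, take its transitive closure, and look for a strict contradiction. The two-observation price/bundle examples below are hypothetical.

```python
import numpy as np

def satisfies_garp(P, Q):
    """Check GARP for T observed price vectors P[t] and chosen bundles Q[t]."""
    E = P @ Q.T                        # E[i, j]: cost of bundle j at prices i
    own = np.diag(E)
    R = own[:, None] >= E              # direct revealed preference
    n_obs = len(P)
    for k in range(n_obs):             # Warshall transitive closure
        R = R | (R[:, [k]] & R[[k], :])
    strict = own[:, None] > E          # strict direct revealed preference
    return not np.any(R & strict.T)    # GARP: no i R j with j strictly pref. i

# Consistent data: the unchosen bundle is unaffordable in period 1
ok = satisfies_garp(np.array([[1.0, 1.0], [1.0, 2.0]]),
                    np.array([[2.0, 0.0], [0.0, 3.0]]))
# A two-observation cycle that violates GARP
bad = satisfies_garp(np.array([[1.0, 2.0], [2.0, 1.0]]),
                     np.array([[1.0, 1.0], [2.0, 0.0]]))
```

Applied to randomly generated choices, such a check is the building block of Bronars-type power computations.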
By:  Stéphane Bonhomme (CEMFI, Centro de Estudios Monetarios y Financieros); Elena Manresa (CEMFI, Centro de Estudios Monetarios y Financieros) 
Abstract:  This paper introduces time-varying grouped patterns of heterogeneity in linear panel data models. A distinctive feature of our approach is that group membership is left unspecified. We estimate the model’s parameters using a “grouped fixed-effects” estimator that minimizes a least-squares criterion with respect to all possible groupings of the cross-sectional units. We rely on recent advances in the clustering literature for fast and efficient computation. Our estimator is higher-order unbiased as both dimensions of the panel tend to infinity, under conditions that we characterize. As a result, inference is not affected by the fact that group membership is estimated. We apply our approach to study the link between income and democracy across countries, while allowing for grouped patterns of unobserved heterogeneity. The results shed new light on the evolution of political and economic outcomes of countries. 
Keywords:  Discrete heterogeneity, panel data, fixed effects, democracy. 
JEL:  C23 
Date:  2012–06 
URL:  http://d.repec.org/n?u=RePEc:cmf:wpaper:wp2012_1208&r=ecm 
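The grouped fixed-effects idea can be sketched in its simplest form, without covariates: alternate between updating group time profiles and reassigning units by least squares, a k-means-type scheme. The panel dimensions, profiles, and noise level below are hypothetical; the paper's estimator also handles regressors.

```python
import numpy as np

rng = np.random.default_rng(5)
N, T, G = 60, 20, 2
true_g = rng.integers(0, G, size=N)
# two well-separated hypothetical group-level time profiles
profiles = np.vstack([np.linspace(0.0, 1.0, T), np.linspace(1.0, 0.0, T)])
Y = profiles[true_g] + 0.1 * rng.standard_normal((N, T))

# Grouped fixed-effects without covariates: alternate between updating
# the group time profiles and reassigning each unit to its best group
g = rng.integers(0, G, size=N)                 # random initial grouping
for _ in range(20):
    A = np.vstack([Y[g == k].mean(axis=0) if np.any(g == k) else Y.mean(axis=0)
                   for k in range(G)])         # profile update
    dist = ((Y[:, None, :] - A[None, :, :]) ** 2).sum(axis=2)
    g = dist.argmin(axis=1)                    # group reassignment
```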
By:  Oliver Grothe; Stephan Nicklas 
Abstract:  Lévy copulas are the most natural concept to capture jump dependence in multivariate Lévy processes. They translate the intuition and many features of the copula concept into a time series setting. A challenge faced by both distributional and Lévy copulas is to find flexible but still applicable models for higher dimensions. To overcome this problem, the concept of pair copula constructions has been successfully applied to distributional copulas. In this paper we develop the pair construction for Lévy copulas (PLCC). Similar to pair constructions of distributional copulas, the pair construction of a d-dimensional Lévy copula consists of d(d-1)/2 bivariate dependence functions. We show that only d-1 of these bivariate functions are Lévy copulas, whereas the remaining functions are distributional copulas. Since there are no restrictions concerning the choice of the copulas, the proposed pair construction adds the desired flexibility to Lévy copula models. We provide detailed estimation and simulation algorithms and apply the pair construction in a simulation study. 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1207.4309&r=ecm 
By:  Rodolfo G. Campos; Ilina Reggio 
Abstract:  We study how estimators used to impute consumption in survey data are inconsistent due to measurement error in consumption. Previous research suggests instrumenting consumption to overcome this problem. We show that, if additional regressors are present in the estimation, then instrumenting consumption may still produce inconsistent estimators. This inconsistency arises from the likely correlation between those additional regressors and the measurement error, a correlation not directly observable. We propose an additional condition to be satisfied by the instrument that reduces and even eliminates measurement error bias in the presence of additional regressors. This condition is directly observable in the data. 
Keywords:  Consumption, Measurement error, Instrumental variables, Consumer expenditure survey, Panel study of income dynamics 
JEL:  C13 C26 E21 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:cte:werepe:we1219&r=ecm 
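The mechanics at stake can be sketched in a simulation: OLS on mismeasured consumption is attenuated, while an instrument uncorrelated with the measurement error recovers the true coefficient. All data below are simulated with hypothetical parameter values; the paper's point concerns the additional condition needed once further regressors enter the equation, which this minimal sketch omits.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 20000
z = rng.standard_normal(n)               # instrument
c_true = z + rng.standard_normal(n)      # true consumption, relevance holds
y = 2.0 * c_true + rng.standard_normal(n)
c_obs = c_true + rng.standard_normal(n)  # consumption measured with error

beta_ols = (c_obs @ y) / (c_obs @ c_obs)  # attenuated toward zero
beta_iv = (z @ y) / (z @ c_obs)           # consistent: z independent of error
```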
By:  Lorenzo Bencivelli (Bank of Italy); Massimiliano Marcellino (European University Institute, Bocconi University and CEPR); Gianluca Moretti (UBS Global asset management) 
Abstract:  This paper proposes the use of Bayesian model averaging (BMA) as a tool to select the set of predictors for bridge models. BMA is a computationally feasible method that allows us to explore the model space even in the presence of a large set of candidate predictors. We test the performance of BMA in nowcasting by means of a recursive experiment for the euro area and its three largest countries. This method allows flexibility in selecting the information set month by month. We find that BMA-based bridge models produce smaller forecast errors than fixed-composition bridges. In an application to the euro area they perform at least as well as medium-scale factor models. 
Keywords:  business cycle analysis, forecasting, Bayesian model averaging, bridge models. 
JEL:  C22 C52 C53 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:bdi:wptemi:td_872_12&r=ecm 
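A common way to sketch BMA over predictor sets is to enumerate all subsets and weight them by a BIC approximation to the posterior model probabilities; the data below are simulated with hypothetical coefficients, and the paper's implementation differs in its priors and in the bridge-model setting.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n, k = 120, 4
X = rng.standard_normal((n, k))
y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.standard_normal(n)  # 2 true predictors

def bic(Xm, y):
    beta, *_ = np.linalg.lstsq(Xm, y, rcond=None)
    rss = ((y - Xm @ beta) ** 2).sum()
    return len(y) * np.log(rss / len(y)) + Xm.shape[1] * np.log(len(y))

# Enumerate all non-empty predictor subsets; weight proportional to exp(-BIC/2)
models, bics = [], []
for r in range(1, k + 1):
    for S in combinations(range(k), r):
        models.append(S)
        bics.append(bic(X[:, S], y))
bics = np.array(bics)
w = np.exp(-(bics - bics.min()) / 2)
w /= w.sum()

# Posterior inclusion probability of each predictor
pip = np.array([sum(w[i] for i, S in enumerate(models) if j in S)
                for j in range(k)])
```

Forecasts from the individual models can then be averaged with the weights w, which is the BMA combination used for nowcasting.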
By:  Giampiero M. Gallo (Dipartimento di Statistica, Universita` di Firenze); Edoardo Otranto (Università degli Studi di Messina, Dipartimento di Scienze Cognitive e della Formazione) 
Abstract:  Persistence and occasional abrupt changes in the average level characterize the dynamics of high frequency based measures of volatility. Since the beginning of the 2000s, this pattern can be attributed to the dot-com bubble, the quiet period of expansion of credit between 2003 and 2006, and then the harsh times after the burst of the subprime mortgage crisis. We conjecture that the inadequacy of many econometric volatility models (a very high level of estimated persistence, serially correlated residuals) can be solved with an adequate representation of such a pattern. We insert a Markovian dynamics in a Multiplicative Error Model to represent the conditional expectation of the realized volatility, allowing us to address the issues of a slow-moving average level of volatility and of different dynamics across regimes. We apply the model to realized volatility of the S&P500 index and gauge the usefulness of such an approach by more interpretable persistence, better residual properties, and an increased goodness of fit. 
Keywords:  Multiplicative Error Models, regime switching, realized volatility, volatility persistence 
JEL:  C22 C51 C52 C58 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:fir:econom:wp2012_02&r=ecm 
By:  Chang-Han Rhee; Peter W. Glynn 
Abstract:  In this paper, we introduce a new approach to constructing unbiased estimators when computing expectations of path functionals associated with stochastic differential equations (SDEs). Our randomization idea is closely related to multilevel Monte Carlo and provides a simple mechanism for constructing a finite variance unbiased estimator with "square root convergence rate" whenever one has available a scheme that produces strong error of order greater than 1/2 for the path functional under consideration. 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1207.2452&r=ecm 
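The randomization idea can be sketched on a toy problem: write the target as a telescoping sum over approximation levels, truncate at a random geometric level N, and reweight each increment by 1/P(N >= n). Here the "levels" are Taylor truncations of exp(X) for X ~ N(0,1), a hypothetical stand-in for the paper's SDE discretization levels; E[exp(X)] = e^0.5 is known in closed form, which makes the unbiasedness checkable.

```python
import numpy as np

rng = np.random.default_rng(9)

def unbiased_estimate(r=0.6):
    """One replication of the randomized multilevel estimator
    Z = sum_{n=0}^{N} Delta_n / P(N >= n), here for E[exp(X)], X ~ N(0,1),
    with Y_n the n-term Taylor truncation of exp(X) and P(N >= n) = r**n."""
    X = rng.standard_normal()
    N = rng.geometric(1.0 - r) - 1     # geometric level, P(N >= n) = r**n
    Z, term = 0.0, 1.0                 # term tracks Delta_n = X**n / n!
    for n in range(N + 1):
        if n > 0:
            term *= X / n
        Z += term / r**n               # increment reweighted by 1/P(N >= n)
    return Z

# averaging replications estimates E[exp(X)] = e**0.5 without any
# truncation bias, despite each replication using finitely many levels
```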
By:  Tobias Scholl (Schumpeter Center for Clusters, Entrepreneurship and Innovation, Frankfurt University); Thomas Brenner (Working Group on Economic Geography and Location Research, Philipps University Marburg) 
Abstract:  We present a new statistical method that detects industrial clusters at the firm level. The proposed method does not divide space into subunits and is therefore not affected by the Modifiable Areal Unit Problem (MAUP). Our metric differs both in its calculation and interpretation from existing distance-based metrics and shows four central properties that enable its meaningful usage for cluster analysis. The method fulfills all five criteria for a test of localization proposed by Duranton and Overman (2005). 
Keywords:  Spatial concentration, localization, clusters, MAUP, distance-based measures 
JEL:  C40 C60 R12 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:pum:wpaper:201202&r=ecm 
By:  Fischer, Matthias 
Abstract:  We introduce a new skewed and leptokurtic distribution derived from the hyperbolic secant distribution and Johnson's SU transformation. Properties of this new distribution are given. Finally, we empirically demonstrate in the context of financial return data that its flexibility is comparable to that of its most advanced peers. 
Keywords:  hyperbolic secant distribution, SU transformation, skewness, leptokurtosis, polynomial tails 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:zbw:iwqwdp:032012&r=ecm 
By:  Thomas Lux 
Abstract:  Maximum likelihood estimation of discretely observed diffusion processes is mostly hampered by the lack of a closed form solution of the transient density. It has recently been argued that a most generic remedy to this problem is the numerical solution of the pertinent Fokker-Planck (FP) or forward Kolmogorov equation. Here we expand extant work on univariate diffusions to higher dimensions. We find that in the bivariate and trivariate cases, a numerical solution of the FP equation via alternating direction finite difference schemes yields results surprisingly close to exact maximum likelihood in a number of test cases. After providing evidence for the efficiency of such a numerical approach, we illustrate its application for the estimation of a joint system of short-run and medium-run investor sentiment and asset price dynamics using German stock market data. 
Keywords:  stochastic differential equations, numerical maximum likelihood, Fokker-Planck equation, finite difference schemes, asset pricing 
JEL:  C58 G12 C13 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:kie:kieliw:1781&r=ecm 
By:  Carl Schmertmann (Max Planck Institute for Demographic Research, Rostock, Germany) 
Abstract:  OBJECTIVE: I develop and explain a new method for interpolating detailed fertility schedules from age-group data. The method allows estimation of fertility rates over a fine grid of ages, from either standard or nonstandard age groups. Users can calculate detailed schedules directly from the input data, using only elementary arithmetic. METHODS: The new method, the calibrated spline (CS) estimator, expands an abridged fertility schedule by finding the smooth curve that minimizes a squared error penalty. The penalty is based both on fit to the available age-group data, and on similarity to patterns of 1fx schedules observed in the Human Fertility Database (HFD) and in the US Census International Database (IDB). RESULTS: I compare the CS estimator to two very good alternative methods that require more computation: Beers interpolation and the HFD's splitting protocol. CS replicates known 1fx schedules from 5fx data better than the other two methods, and its interpolated schedules are also smoother. CONCLUSIONS: The CS method is an easily computed, flexible, and accurate method for interpolating detailed fertility schedules from age-group data. COMMENTS: Data and R programs for replicating this paper's results are available online at http://calibratedspline.schmert.net 
JEL:  J1 Z0 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:dem:wpaper:wp2012022&r=ecm 
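The core of such an estimator is penalized least squares: find a single-year schedule whose group sums match the abridged data while a roughness penalty keeps the curve smooth. The group values, age range, and penalty weight below are hypothetical, and the paper's CS estimator additionally calibrates the penalty to HFD/IDB schedule patterns, which this generic sketch does not.

```python
import numpy as np

# Hypothetical abridged schedule: 5-year age-group fertility sums, ages 15-44
groups = np.array([0.05, 0.35, 0.60, 0.45, 0.20, 0.05])
ages = np.arange(15, 45)                    # single years of age
n = len(ages)

# C maps a single-year schedule to group sums; D penalizes roughness
C = np.zeros((len(groups), n))
for g in range(len(groups)):
    C[g, 5 * g:5 * g + 5] = 1.0
D = np.diff(np.eye(n), n=2, axis=0)         # second-difference matrix

# minimize ||C f - groups||^2 + lam * ||D f||^2 in closed form
lam = 1.0
f = np.linalg.solve(C.T @ C + lam * D.T @ D, C.T @ groups)
```

The solution is linear in the input data, which is what allows detailed schedules to be computed "using only elementary arithmetic" once the weights are tabulated.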
By:  Massimo Franchi ("Sapienza" Universita' di Roma); Paolo Paruolo (Universita' dell'Insubria) 
Abstract:  This paper shows that the poor man's invertibility condition in Fernandez-Villaverde et al. (2007) is, in general, sufficient but not necessary for fundamentalness; that is, a violation of this condition does not necessarily imply the impossibility of recovering the structural shocks of a DSGE via a VAR. The permanent income model in Fernandez-Villaverde et al. (2007) is used to illustrate this fact. A necessary and sufficient condition for fundamentalness is formulated and its relations with the poor man's invertibility condition are discussed. 
Keywords:  DSGE; VAR; invertibility; nonfundamentalness. 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:sas:wpaper:20124&r=ecm 
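The poor man's invertibility condition itself is easy to state for an ABCD state space x' = A x + B w, y = C x + D w: all eigenvalues of A - B D^{-1} C strictly inside the unit circle. A sketch with hypothetical scalar examples (an MA(1) in state-space form), keeping in mind the paper's point that this condition is sufficient but not necessary:

```python
import numpy as np

def poor_mans_invertibility(A, B, C, D):
    """Sufficient condition of Fernandez-Villaverde et al. (2007): all
    eigenvalues of A - B D^{-1} C strictly inside the unit circle."""
    M = A - B @ np.linalg.solve(D, C)
    return np.max(np.abs(np.linalg.eigvals(M))) < 1.0

# MA(1) y_t = w_t + theta * w_{t-1} as x' = 0*x + w, y = theta*x + w:
# the condition reduces to |theta| < 1 (invertible MA)
fund = poor_mans_invertibility(np.array([[0.0]]), np.array([[1.0]]),
                               np.array([[0.5]]), np.array([[1.0]]))
nonfund = poor_mans_invertibility(np.array([[0.0]]), np.array([[1.0]]),
                                  np.array([[2.0]]), np.array([[1.0]]))
```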