
on Econometrics 
By:  Chen, Songxi 
Abstract:  This paper considers the problem of parameter estimation in a general class of semiparametric models when observations are subject to missingness at random. The semiparametric models allow for estimating functions that are nonsmooth with respect to the parameter. We propose a nonparametric imputation method for the missing values, which then leads to imputed estimating equations for the finite-dimensional parameter of interest. The asymptotic normality of the parameter estimator is proved in a general setting, and is investigated in detail for a number of specific semiparametric models. Finally, we study the small-sample performance of the proposed estimator via simulations. 
Keywords:  Copulas; imputation; kernel smoothing; missing at random; nuisance function; partially linear model; semiparametric model; single index model. 
JEL:  C0 C1 C2 C3 C4 C5 C6 C7 C8 C9 G0 
Date:  2012–12 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:46216&r=ecm 
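The nonparametric imputation step described in the abstract above can be given a minimal flavour in code: fill in missing responses with a Nadaraya-Watson kernel regression fitted on the complete cases. This is an illustrative simplification, not the paper's full imputed-estimating-equation procedure; the helper `nw_impute`, the bandwidth `h`, and the simulated design are all assumptions.

```python
import numpy as np

def nw_impute(x, y, observed, h=0.1):
    """Fill in missing y values with a Nadaraya-Watson (Gaussian kernel)
    regression estimated on the complete cases."""
    x_obs, y_obs = x[observed], y[observed]
    y_full = y.copy()
    for i in np.where(~observed)[0]:
        w = np.exp(-0.5 * ((x[i] - x_obs) / h) ** 2)  # kernel weights
        y_full[i] = np.sum(w * y_obs) / np.sum(w)     # local weighted average
    return y_full

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 300)
y = 2.0 * x + rng.normal(0, 0.1, 300)
observed = rng.uniform(size=300) > 0.4 * x  # missingness depends on x only (MAR)
y_imp = nw_impute(x, y, observed)
```

Because missingness depends only on the observed covariate x, the complete-case regression consistently recovers E[y|x], which is what makes this kind of imputation valid under missingness at random.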
By:  Masahiro Kojima (Graduate School of Economics, University of Tokyo); Tatsuya Kubokawa (Faculty of Economics, University of Tokyo) 
Abstract:  Consider the problem of testing a linear hypothesis on regression coefficients in a general linear regression model whose error term has a covariance matrix involving several nuisance parameters. The three typical test statistics, Wald, score and likelihood ratio (LR), and their Bartlett adjustments have been derived in the literature for the case where the unknown nuisance parameters are estimated by maximum likelihood (ML). On the other hand, statistical inference in linear mixed models has been studied actively and extensively in recent years, with applications to small-area estimation. The marginal distribution of the linear mixed model is included in the framework of the general linear regression model, and the nuisance parameters correspond to the variance components and others in the linear mixed model. Although the restricted ML (REML), the minimum norm quadratic unbiased estimator (MINQUE) and other specific estimators are available for estimating the variance components, the Bartlett adjustments given in the literature are not correct for estimators other than ML. In this paper, using Taylor series expansions, we derive the Bartlett adjustments of the Wald, score and modified LR tests for general consistent estimators of the unknown nuisance parameters. These analytical results may be hard to calculate for a model with a complicated covariance structure, so we propose simple parametric bootstrap methods for estimating the Bartlett adjustments and show that they have second-order accuracy. Finally, simulation experiments in the nested error regression model show that both Bartlett adjustments work well. 
Date:  2013–04 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2013cf884&r=ecm 
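The parametric bootstrap route to a Bartlett adjustment can be sketched in a toy setting: testing a normal mean with unknown variance. The model and the rescaling rule below are illustrative assumptions, not the paper's linear mixed model; the idea is only that simulating under the fitted null lets one rescale the LR statistic so its mean matches the chi-squared mean.

```python
import numpy as np

def lr_stat(x, mu0=0.0):
    """LR statistic for H0: mean = mu0 in a normal model, variance unknown."""
    n = len(x)
    s2_alt = np.var(x)                  # MLE of the variance under H1
    s2_null = np.mean((x - mu0) ** 2)   # MLE of the variance under H0
    return n * np.log(s2_null / s2_alt)

rng = np.random.default_rng(1)
n = 15
x = rng.normal(0.0, 2.0, n)
lr = lr_stat(x)

# Parametric bootstrap Bartlett adjustment: simulate under the fitted null
# and rescale the statistic so its mean matches the chi2(1) mean.
B = 2000
sigma_hat = np.sqrt(np.mean(x ** 2))    # null MLE of the nuisance parameter
lr_boot = np.array([lr_stat(rng.normal(0.0, sigma_hat, n)) for _ in range(B)])
lr_adj = lr / np.mean(lr_boot)          # df / bootstrap mean, with df = 1
```

In small samples the mean of the LR statistic exceeds its chi-squared degrees of freedom, and the bootstrap factor shrinks the statistic accordingly.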
By:  Qiu, Yumou; Chen, Songxi 
Abstract:  Motivated by the latest effort to employ banded matrices to estimate a high-dimensional covariance Σ, we propose a test for Σ being banded with possibly diverging bandwidth. The test is adaptive to the “large p, small n” situation without assuming a specific parametric distribution for the data. We also formulate a consistent estimator for the bandwidth of a banded high-dimensional covariance matrix. The properties of the test and the bandwidth estimator are investigated by theoretical evaluations and simulation studies, as well as an empirical analysis of protein mass spectroscopy data. 
Keywords:  Banded covariance matrix; Bandwidth estimation; High data dimension; Large p, small n; Nonparametric. 
JEL:  C0 C1 C2 C3 C4 C5 C6 C7 C8 C9 G0 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:46242&r=ecm 
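A naive version of bandwidth estimation for a banded covariance can be sketched as follows: band the sample covariance and keep the largest lag whose entries are not negligible. The threshold rule, `estimate_bandwidth`, and the simulated design are crude stand-ins for the paper's estimator, shown only to fix ideas.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, k_true = 200, 30, 2

# A banded population covariance: entries decay and vanish beyond lag k_true.
idx = np.arange(p)
dist = np.abs(idx[:, None] - idx[None, :])
sigma = np.where(dist <= k_true, 0.4 / np.maximum(dist, 1), 0.0)
np.fill_diagonal(sigma, 1.0)
X = rng.multivariate_normal(np.zeros(p), sigma, size=n)
S = np.cov(X, rowvar=False)

def estimate_bandwidth(S, thresh=0.15):
    """Naive bandwidth estimate: the largest lag whose average absolute
    sample covariance exceeds a threshold (illustrative rule only)."""
    k_hat = 0
    for k in range(1, S.shape[0]):
        if np.mean(np.abs(np.diag(S, k))) > thresh:
            k_hat = k
    return k_hat

k_hat = estimate_bandwidth(S)
```

Here p = 30 with only n = 200 observations, so entrywise noise in S is nontrivial; averaging along each off-diagonal is what makes the lag signal visible.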
By:  Chen, Songxi; Peng, Liang; Yu, Cindy 
Abstract:  Markov processes are used in a wide range of disciplines, including finance. The transition densities of these processes are often unknown. However, the conditional characteristic functions are more likely to be available, especially for Lévy-driven processes. We propose an empirical likelihood approach, for both parameter estimation and model specification testing, based on the conditional characteristic function for processes with either continuous or discontinuous sample paths. Theoretical properties of the empirical likelihood estimator for parameters and a smoothed empirical likelihood ratio test for a parametric specification of the process are provided. Simulations and empirical case studies are carried out to confirm the effectiveness of the proposed estimator and test. 
Keywords:  Conditional characteristic function; Diffusion processes; Empirical likelihood; Kernel smoothing; Lévy-driven processes 
JEL:  C0 C1 C2 C3 C4 C5 C6 C7 C8 C9 G0 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:46273&r=ecm 
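To give a flavour of estimation based on the conditional characteristic function, here is a toy version for a Gaussian AR(1), where the conditional CF is known in closed form. The unconditional moment-matching objective and the grid search are crude illustrative stand-ins for the paper's empirical likelihood machinery; `ccf_objective` and the chosen frequencies `u_grid` are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
T, rho_true, sig = 5000, 0.6, 1.0
x = np.zeros(T)
for t in range(1, T):
    x[t] = rho_true * x[t - 1] + rng.normal(0, sig)

u_grid = np.array([0.5, 1.0, 2.0])  # frequencies at which the CCF is matched

def ccf_objective(rho):
    """Distance between empirical and model conditional characteristic
    functions of a Gaussian AR(1), where the CCF is known in closed form:
    E[exp(i*u*x_{t+1}) | x_t] = exp(i*u*rho*x_t - u**2 * sig**2 / 2)."""
    obj = 0.0
    for u in u_grid:
        emp = np.exp(1j * u * x[1:])
        model = np.exp(1j * u * rho * x[:-1] - u ** 2 * sig ** 2 / 2)
        obj += np.abs(np.mean(emp - model)) ** 2
    return obj

grid = np.arange(0.0, 1.0, 0.01)
rho_hat = grid[np.argmin([ccf_objective(r) for r in grid])]
```

The point is only that the characteristic function supplies usable moment conditions even when a transition density would be awkward to write down.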
By:  Chen, Songxi 
Abstract:  The conventional Wilcoxon/Mann-Whitney test can be invalid for comparing treatment effects in the presence of missing values or in observational studies, because the missingness of the outcomes or the participation in the treatments may depend on certain pretreatment variables. We propose an approach to adjust the Mann-Whitney test by correcting the potential bias via consistently estimating the conditional distributions of the outcomes given the pretreatment variables. We also propose semiparametric extensions of the adjusted Mann-Whitney test which lead to dimension reduction for high-dimensional covariates. A novel bootstrap procedure is devised to approximate the null distribution of the test statistics for practical implementation. Results from simulation studies and an analysis of economic observational study data are presented to demonstrate the performance of the proposed approach. 
Keywords:  Dimension reduction; Kernel smoothing; Mann-Whitney statistic; Missing outcomes; Observational studies; Selection bias. 
JEL:  C0 C1 C2 C3 C4 C5 C6 C7 C8 C9 G0 
Date:  2013–01 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:46239&r=ecm 
By:  Hendrik Kaufmann (Leibniz University Hannover); Robinson Kruse (Leibniz University Hannover and CREATES) 
Abstract:  This paper provides a comprehensive Monte Carlo comparison of different finite-sample bias-correction methods for autoregressive processes. We consider classic situations where the process is either stationary or exhibits a unit root. Importantly, the case of mildly explosive behaviour is studied as well. We compare the empirical performance of an indirect inference estimator (Phillips, Wu, and Yu, 2011), a jackknife approach (Chambers, 2013), the approximately median-unbiased estimator by Roy and Fuller (2001) and the bootstrap-aided estimator by Kim (2003). Our findings suggest that the indirect inference approach offers a valuable alternative to other existing techniques. Its performance (measured by its bias and root mean squared error) is balanced and highly competitive across many different settings. A clear advantage is its applicability to mildly explosive processes. In an empirical application to a long annual US Debt/GDP series we consider rolling window estimation of autoregressive models. We find substantial evidence for time-varying persistence and periods of explosiveness during the Civil War and World War II. In recent years, the series has been nearly explosive again. Further applications to commodity and interest rate series are considered as well. 
Keywords:  Bias-correction, Explosive behavior, Rolling window estimation 
JEL:  C13 C22 H62 
Date:  2013–04–15 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201310&r=ecm 
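The indirect inference idea behind the comparison above can be sketched for an AR(1): the OLS estimate of the autoregressive coefficient is biased toward zero in small samples, and indirect inference inverts the simulated binding function to undo that bias. The grid search below is a deliberately crude version of that inversion; `indirect_inference`, the grid, and the simulation sizes are all illustrative assumptions.

```python
import numpy as np

def simulate_ar1(rho, T, rng):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + rng.normal()
    return y

def ols_rho(y):
    """OLS slope in y_t = rho * y_{t-1} + e_t (no intercept)."""
    return np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

def indirect_inference(rho_hat, T, grid, n_sim=100, seed=42):
    """Grid-search sketch of indirect inference: pick the rho whose mean
    simulated OLS estimate is closest to the observed estimate."""
    rng = np.random.default_rng(seed)
    best, best_gap = grid[0], np.inf
    for rho in grid:
        sims = [ols_rho(simulate_ar1(rho, T, rng)) for _ in range(n_sim)]
        gap = abs(np.mean(sims) - rho_hat)
        if gap < best_gap:
            best, best_gap = rho, gap
    return best

rng = np.random.default_rng(3)
T, rho_true = 50, 0.9
y = simulate_ar1(rho_true, T, rng)
rho_ols = ols_rho(y)                      # biased toward zero in small samples
rho_ii = indirect_inference(rho_ols, T, grid=np.arange(0.5, 1.001, 0.01))
```

Because the binding function remains well defined for coefficients at and slightly above one, nothing in this construction breaks down for mildly explosive processes, which is the advantage the abstract highlights.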
By:  Huber, Martin 
Abstract:  This paper proposes a simple method for testing whether noncompliance in experiments is ignorable, i.e., not jointly related to the treatment and the outcome. The approach consists of (i) regressing the outcome variable on a constant, the treatment, the assignment indicator, and the treatment/assignment interaction, and (ii) testing whether the coefficients on the latter two variables are jointly equal to zero. A brief simulation study illustrates the finite-sample properties of the test. 
Keywords:  Experiment, treatment effects, noncompliance, endogeneity, test. 
JEL:  C12 C21 C26 
Date:  2013–04 
URL:  http://d.repec.org/n?u=RePEc:usg:econwp:2013:12&r=ecm 
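The two-step procedure in the abstract above is concrete enough to sketch directly: run the regression of step (i), then form the joint Wald test of step (ii). The simulated experiment, with compliance types drawn independently of the outcome so that noncompliance is ignorable by construction, is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000

# Simulated experiment with two-sided, ignorable noncompliance: compliance
# types are independent of the outcome (all names below are illustrative).
z = rng.integers(0, 2, n)                       # random assignment
complier = rng.uniform(size=n) < 0.6
always = rng.uniform(size=n) < 0.2              # always-takers
d = np.where(z == 1, (complier | always), always).astype(float)
y = 1.0 + 0.5 * d + rng.normal(0, 1, n)         # outcome

# Step (i): regress y on a constant, d, z and the d*z interaction.
X = np.column_stack([np.ones(n), d, z, d * z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (n - X.shape[1])
cov = s2 * np.linalg.inv(X.T @ X)

# Step (ii): Wald test that the coefficients on z and d*z are jointly zero;
# under ignorable noncompliance the statistic is approximately chi2(2).
R = np.array([[0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]])
r = R @ beta
wald = r @ np.linalg.solve(R @ cov @ R.T, r)
reject_5pct = wald > 5.991                      # chi2(2) 5% critical value
```

With noncompliance that is jointly related to treatment and outcome, the z and d*z coefficients pick up the resulting selection, and the Wald statistic grows with n.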
By:  Das, Arabinda 
Abstract:  It has been argued that the deterministic frontier approach to inefficiency measurement has a major limitation: inefficiency is mixed with measurement error (statistical noise), so the inefficiency estimates are contaminated with noise. The stochastic frontier approach later improved the situation by allowing for a statistical noise term in the model that captures all factors other than inefficiency, and the stochastic frontier model has been widely used for inefficiency analysis despite its complicated form and estimation procedure. This paper introduces an extra parameter which estimates the proportion of the observational error attributable to the noise component. An EM approach is used for estimating the model, and a test procedure is developed to test for the presence of the noise component in the observational error. 
Keywords:  stochastic frontier model, skew-normal distribution, identification, EM algorithm, Monte Carlo simulation. 
JEL:  C15 C21 C51 
Date:  2013–04–13 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:46168&r=ecm 
By:  Anne Neumann; Maria Nieswand; Torben Schubert 
Abstract:  Nonparametric efficiency analysis has become a widely applied technique to support industrial benchmarking as well as a variety of incentive-based regulation policies. In practice such exercises are often plagued by incomplete knowledge about the correct specification of inputs and outputs. Simar and Wilson (2001) and Schubert and Simar (2011) propose restriction tests to support such specification decisions for cross-section data. However, the typical oligopolized market structure pertinent to regulation contexts often leads to small numbers of cross-section observations, rendering reliable estimation based on these tests practically infeasible. This small-sample problem could often be avoided with the use of panel data, which would in any case require an extension of the cross-section restriction tests to handle panel data. In this paper we derive these tests. We prove the consistency of the proposed method and apply it to a sample of US natural gas transmission companies from 2003 through 2007. We find that the total quantity of gas delivered and the quantity delivered in peak periods measure essentially the same output, so only one needs to be included. We also show that the length of mains as a measure of transportation service is nonredundant and therefore must be included. 
Keywords:  Benchmarking models, Network industries, Nonparametric efficiency estimation, Data envelopment analysis, Testing restrictions, Subsampling, Bootstrap. 
JEL:  C14 L51 L95 
Date:  2013–03 
URL:  http://d.repec.org/n?u=RePEc:rsc:rsceui:2013/13&r=ecm 
By:  Tinkl, Fabian 
Abstract:  In this article, consistency and asymptotic normality of the quasi-maximum likelihood estimator (QMLE) are proven for the class of polynomial augmented generalized autoregressive conditional heteroscedasticity (GARCH) models. The results extend those of Berkes et al. (2003) and Francq and Zakoïan (2004) for the standard GARCH model to the augmented GARCH models introduced by Duan (1997), which contain many commonly employed GARCH models as special cases. The conditions for consistency and asymptotic normality are more tractable than the ones discussed in Straumann and Mikosch (2006). 
Keywords:  asymptotic normality, consistency, polynomial augmented GARCH models, quasi-maximum likelihood estimation 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:zbw:iwqwdp:032013&r=ecm 
By:  Robinson Kruse (Leibniz University Hannover and CREATES); Daniel Ventosa-Santaulària (Centro de Investigación y Docencia Económicas, CIDE); Antonio E. Noriega (Banco de México) 
Abstract:  Declining inflation persistence has been documented in numerous studies. When such series are analyzed in a regression framework in conjunction with other persistent time series, spurious regressions are likely to occur. We propose to use the coefficient of determination R^2 as a test statistic to distinguish between spurious and genuine regressions in situations where the time series possibly (but not necessarily) exhibit changes in persistence. To this end, we establish some limit theory for the R^2 statistic and conduct a Monte Carlo study where we investigate its finite-sample properties. Finally, we apply the test to the Fisher equation for the U.S. and Mexico. Contrary to a rejection using cointegration techniques, the R^2-based test offers strong evidence favourable to the Fisher hypothesis. 
Keywords:  Changes in persistence, Spurious regression, Fisher hypothesis. 
JEL:  C12 C22 E31 E43 
Date:  2013–11–04 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201311&r=ecm 
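The intuition behind using the coefficient of determination as a test statistic can be seen in a small simulation: for two independent random walks the R^2 of a spurious regression does not collapse to zero as T grows, whereas for independent stationary series it does. This is a purely illustrative demo, not the paper's test, which requires the limit theory to supply critical values.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 5000

def r_squared(y, x):
    """R^2 from an OLS regression of y on a constant and x."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)

# Independent random walks: spurious regression, R^2 does not converge to 0.
walk_y = np.cumsum(rng.normal(size=T))
walk_x = np.cumsum(rng.normal(size=T))
r2_spurious = r_squared(walk_y, walk_x)

# Independent stationary series: R^2 shrinks toward 0 as T grows.
stat_y = rng.normal(size=T)
stat_x = rng.normal(size=T)
r2_stationary = r_squared(stat_y, stat_x)
```

In the spurious case R^2 has a nondegenerate limiting distribution, which is why fixed critical values for it can separate spurious from genuine relationships.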
By:  Aquaro, M. (Tilburg University) 
Abstract:  Panel data sets, also called longitudinal data sets, are sets of data where the same units (for instance individuals, firms, or countries) are observed more than once. Models that exploit the specific structure of these data sets are called panel data models. One of the main advantages of using these models is the possibility of appropriately including unobserved variables characterizing individual heterogeneity and heterogeneity of individual decisions. In the last sixty years, panel data and methods of econometric analysis appropriate to such data have become increasingly important in the discipline. Unfortunately, almost all of the related literature focuses on models assuming that the data are free of outlying or aberrant observations. This is often not the case in reality, and the majority of the regression methods used in linear panel data models are very sensitive to data contamination and outliers. This doctoral thesis focuses on the estimation of linear panel data models with and without outliers. It consists of two parts. In the first part, some new estimation methods are proposed for static (Chapter 2) and dynamic (Chapter 3) models when data sets are assumed to be contaminated by outlying or aberrant observations. The second part (Chapter 4) is a contribution to the theory of estimation of dynamic models when data are assumed not to be contaminated. 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:ner:tilbur:urn:nbn:nl:ui:125904974&r=ecm 
By:  Josep Lluís Carrion-i-Silvestre (Faculty of Economics, University of Barcelona); María Dolores Gadea (Department of Applied Economics, University of Zaragoza) 
Abstract:  We show that the use of generalized least squares (GLS) detrending procedures leads to important empirical power gains compared to the ordinary least squares (OLS) detrending method when testing the null hypothesis of a unit root for bounded processes. The noncentrality parameter that is used in the GLS detrending depends on the bounds, so that improvements in statistical inference are to be expected if a case-specific parameter is used. This initial hypothesis is supported by the simulation experiment that we conduct. 
Keywords:  Unit root, bounded process, quasi GLS-detrending. 
JEL:  C12 C22 
Date:  2013–04 
URL:  http://d.repec.org/n?u=RePEc:aqr:wpaper:201302&r=ecm 
By:  Yuanhua Feng (University of Paderborn); Chen Zhou (University of Paderborn) 
Abstract:  This paper discusses forecasting of long memory and of a nonparametric scale function in nonnegative financial processes based on a fractionally integrated LogACD (FILogACD) model and its semiparametric extension (SemiFILogACD). Necessary and sufficient conditions for the existence of a stationary solution of the FILogACD are obtained. Properties of this model under the lognormal assumption are summarized. A linear predictor based on the truncated AR(∞) form of the logarithmic process is proposed, and it is shown to be an approximately best linear predictor. Approximate variances of the prediction errors for an individual observation and for the conditional mean are obtained. Forecasting intervals for these quantities in the log and the original processes are calculated under the lognormal assumption. The proposals are applied to forecasting daily trading volumes and daily numbers of trades in financial markets. 
Keywords:  Approximately best linear predictor, FILogACD, financial forecasting, long memory time series, nonparametric methods, SemiFILogACD 
Date:  2013–04 
URL:  http://d.repec.org/n?u=RePEc:pdn:wpaper:59&r=ecm 
By:  Atwood, Joseph A.; Bittinger, Alison; Smith, Vincent H. 
Abstract:  We demonstrate that Theil-type variance corrections are required to obtain consistent marginal effect estimates in Nelson-Olson's two-stage limited dependent variable (2SLDV) model. As Theil's residuals-based corrections are infeasible with 2SLDV, we present variance correction procedures shown to be virtually equivalent to Theil's 2SLS corrections for continuous models but implementable in 2SLDV models. Simulations demonstrate that the proposed variance correction procedures generate consistent marginal effect estimates. The effects of the correction procedures are illustrated in a study of technology adoption by Ethiopian farmers. Components of the variance correction procedures should prove useful in other applications involving limited dependent variables. 
Keywords:  Simultaneous Equation Model, Limited Dependent Variable, Discrete Choice, Theil Correction, Research Methods/Statistical Methods 
Date:  2013–03 
URL:  http://d.repec.org/n?u=RePEc:ags:umaesp:146790&r=ecm 
By:  Cerqueti, Roy; Falbo, Paolo; Pelizzari, Cristian 
Abstract:  Markov chain theory is proving to be a powerful approach to bootstrapping highly nonlinear time series. In this work we provide a method to estimate the memory of a Markov chain (i.e., its order) and to identify its relevant states. In particular, the choice of memory lags and the aggregation of irrelevant states are obtained by looking for regularities in the transition probabilities. Our approach is based on an optimization model. More specifically, we consider two competing objectives that a researcher will in general pursue when bootstrapping: preserving the “structural” similarity between the original and the simulated series, and ensuring a controlled diversification of the latter. A discussion based on information theory is developed to define the desirable properties of such optimality criteria. Two numerical tests are developed to verify the effectiveness of the method proposed here. 
Keywords:  Bootstrapping; Information Theory; Markov chains; Optimization; Simulation. 
JEL:  C15 C61 C63 C65 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:46250&r=ecm 
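One simple way to estimate the memory of a finite-state chain, in the spirit of the abstract above though much cruder than its optimization model, is to fit empirical transition probabilities at several orders and select the order by BIC. The functions `log_lik` and `bic_order` and the penalty choice are illustrative assumptions, not the paper's criteria.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(6)

# Simulate a two-state first-order Markov chain.
P = np.array([[0.9, 0.1], [0.3, 0.7]])
T = 5000
s = [0]
for _ in range(T - 1):
    s.append(rng.choice(2, p=P[s[-1]]))
s = np.array(s)

def log_lik(s, order):
    """Log-likelihood of the chain under an order-k Markov model, with
    transition probabilities estimated by empirical frequencies."""
    ctx, trans = Counter(), Counter()
    for t in range(order, len(s)):
        key = tuple(s[t - order:t])
        ctx[key] += 1
        trans[key + (s[t],)] += 1
    return sum(c * np.log(c / ctx[k[:-1]]) for k, c in trans.items())

def bic_order(s, max_order=3, n_states=2):
    """Select the memory (order) by BIC (illustrative selection rule)."""
    best, best_bic = 0, np.inf
    for k in range(1, max_order + 1):
        n_par = n_states ** k * (n_states - 1)
        bic = -2 * log_lik(s, k) + n_par * np.log(len(s))
        if bic < best_bic:
            best, best_bic = k, bic
    return best

order_hat = bic_order(s)
```

Raising the candidate order always raises the fitted likelihood, so a complexity penalty of this kind is what keeps the estimated memory from growing spuriously.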
By:  Martin M. Andreasen (Aarhus University and CREATES); Jesús Fernández-Villaverde (University of Pennsylvania, FEDEA, NBER, and CEPR); Juan F. Rubio-Ramírez (Duke University, Federal Reserve Bank of Atlanta, and FEDEA) 
Abstract:  This paper studies the pruned state-space system for higher-order approximations to the solutions of DSGE models. For second- and third-order approximations, we derive the statistical properties of this system and provide closed-form expressions for first and second unconditional moments and impulse response functions. Thus, our analysis introduces GMM estimation for DSGE models approximated up to third order and provides the foundation for indirect inference and SMM when simulation is required. We illustrate the usefulness of our approach by estimating a New Keynesian model with habits and Epstein-Zin preferences by GMM when using first and second unconditional moments of macroeconomic and financial data and by SMM when using additional third and fourth unconditional moments and non-Gaussian innovations. 
JEL:  C15 C53 E30 
Date:  2013–11–04 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201312&r=ecm 
By:  Wolfgang Polasek (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria and University of Porto, Portugal) 
Abstract:  Growth rate data that are collected incompletely in cross-sections are a quite frequent problem. Chow and Lin (1971) developed a method for predicting unobserved disaggregated time series, and we propose an extension of the procedure for completing cross-sectional growth rates, similar to the spatial Chow-Lin method of Liano et al. (2009). Disaggregated growth rates cannot be predicted directly; doing so requires a system estimation of two Chow-Lin prediction models, where we compare classical and Bayesian estimation and prediction methods. We demonstrate the procedure for Spanish regional GDP growth rates between 2000 and 2004 at the NUTS-3 level. We evaluate the growth rate forecasts by accuracy criteria, because for the Spanish dataset we can compare the predicted with the observed values. 
Keywords:  Interpolation, missing disaggregated values in spatial econometrics, MCMC, spatial Chow-Lin methods, predicting growth rate data, spatial autoregression (SAR), forecast evaluation, outliers 
JEL:  C11 C15 C52 E17 R12 
Date:  2013–04 
URL:  http://d.repec.org/n?u=RePEc:ihs:ihsesp:295&r=ecm 
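The flavour of a Chow-Lin prediction can be shown in the classical temporal-disaggregation case with iid errors: regress the aggregated series on the aggregated indicator by least squares, then distribute the low-frequency residuals back to the high-frequency periods. This is an illustrative sketch only; the paper's spatial and growth-rate extensions are more involved, and all names below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
n_low, s = 10, 4                      # 10 annual observations, 4 quarters each
n_high = n_low * s

# High-frequency indicator and the (unobserved) quarterly target series.
x = rng.normal(0, 1, n_high).cumsum() + 20
y_true = 2.0 + 1.5 * x + rng.normal(0, 0.3, n_high)

# Aggregation matrix C: sums each block of 4 quarters into a year.
C = np.kron(np.eye(n_low), np.ones((1, s)))
y_low = C @ y_true                    # only annual totals are observed

# Chow-Lin with iid errors: estimate on the aggregated model, then spread
# each annual residual evenly over its quarters.
X = np.column_stack([np.ones(n_high), x])
Xl = C @ X
beta, *_ = np.linalg.lstsq(Xl, y_low, rcond=None)
resid_low = y_low - Xl @ beta
y_hat = X @ beta + C.T @ resid_low / s
```

By construction the disaggregated predictions aggregate back exactly to the observed annual totals, which is the defining constraint of the Chow-Lin family.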
By:  Garland Durham (Quantos Analytics, LLC); John Geweke (Economics Discipline Group, University of Technology, Sydney) 
Abstract:  Massively parallel desktop computing capabilities, now well within the reach of individual academics, modify the environment for posterior simulation in fundamental and potentially quite advantageous ways. But to fully exploit these benefits, algorithms that conform to parallel computing environments are needed. Sequential Monte Carlo comes very close to this ideal, whereas other approaches like Markov chain Monte Carlo do not. This paper presents a sequential posterior simulator well suited to this computing environment. The simulator makes fewer analytical and programming demands on investigators, and is faster, more reliable and more complete than conventional posterior simulators. The paper extends existing sequential Monte Carlo methods and theory to provide a thorough and practical foundation for sequential posterior simulation that is well suited to massively parallel computing environments. It provides detailed recommendations on implementation, yielding an algorithm that requires only code for simulation from the prior and evaluation of prior and data densities, and that works well in a variety of applications representative of serious empirical work in economics and finance. The algorithm is robust to pathological posterior distributions, generates accurate marginal likelihood approximations, and provides estimates of numerical standard error and relative numerical efficiency intrinsically. The paper concludes with an application that illustrates the potential of these simulators for applied Bayesian inference. 
Keywords:  Graphics processing unit; particle filter; posterior simulation; sequential Monte Carlo; single instruction multiple data 
JEL:  C11 C63 
Date:  2013–04–01 
URL:  http://d.repec.org/n?u=RePEc:uts:ecowps:9&r=ecm 
By:  Masuhara, Hiroaki 
Abstract:  Background: In duration analysis, we find situations where covariates are determined simultaneously with the duration variable. Moreover, although models based on a hazard rate do not explicitly assume heterogeneity, in applied econometrics the possibility of omitted variables is inevitable, and controlling for population heterogeneity alone is inadequate. It is important to consider both heterogeneity and endogeneity in duration analysis. Objectives and methods: Explicitly assuming semiparametric correlated heterogeneity, this paper proposes an alternative robust duration model with an endogenous binary variable that generalizes the heterogeneity of both duration and endogeneity using Hermite polynomials. Under this setup, we investigate the difference between the endogenous binary variable's coefficients in the parametric and semiparametric models using the Medical Expenditure Panel Survey (MEPS) data. Results: The parameter values of the endogenous binary variable (insurance choice) are statistically significant at the 1% level; however, the values differ between the parametric and semiparametric models: holding any type of insurance increases the length of hospital stays by 104.010% in the censored parametric model and by 182.074% in the censored semiparametric model. Compared with the parametric model, the increase in hospital stays in the semiparametric model is large. Moreover, we find that the semiparametric model has a twin-peak density and that the contour lines differ from the usual ellipsoids of the bivariate normal density. Conclusions: When applied to the duration of hospital stays in the MEPS data, the estimated semiparametric model shows good performance. The absolute values of the endogenous binary regressor coefficients of the semiparametric models are larger than that of the parametric model; the parametric model underestimates the effect of the individual's insurance choice in our example. Moreover, the estimated densities of the semiparametric models have twin-peak distributions. 
Keywords:  Endogenous switching, duration analysis, probit, seminonparametric model, heterogeneity 
JEL:  C14 C31 C34 
Date:  2013–03 
URL:  http://d.repec.org/n?u=RePEc:hit:cisdps:597&r=ecm 
By:  El Montasser, Ghassen; Boufateh, Talel; Issaoui, Fakhri 
Abstract:  This paper shows, through a Monte Carlo analysis, the effect of neglecting seasonal deterministics on the seasonal KPSS test. We find that the test is in most cases heavily oversized and not convergent in this situation. In addition, a Bartlett-type nonparametric correction of the error variances does not significantly change the test's rejection frequencies. 
Keywords:  Deterministic seasonality, Seasonal KPSS Test, Monte Carlo Simulations. 
JEL:  C22 C53 
Date:  2013–04–15 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:46226&r=ecm 
By:  Sam Schulhofer-Wohl (Federal Reserve Bank of Minneapolis) 
Abstract:  The standard approach to estimating structural parameters in life-cycle models imposes sufficient assumptions on the data to identify the "age profile" of outcomes, then chooses model parameters so that the model's age profile matches this empirical age profile. I show that the standard approach is both incorrect and unnecessary: incorrect, because it generically produces inconsistent estimators of the structural parameters, and unnecessary, because consistent estimators can be obtained under weaker assumptions. I derive an identification method that avoids the problems of the standard approach and illustrate its benefits in a simple model of consumption inequality. 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:red:sed012:575&r=ecm 
By:  Joseph W. Sakshaug; Trivellore E. Raghunathan 
Abstract:  Small area estimates provide a critical source of information used to study local populations. Statistical agencies regularly collect data from small areas but are prevented from releasing detailed geographical identifiers in public-use data sets due to disclosure concerns. Alternative data dissemination methods used in practice include releasing summary/aggregate tables, suppressing detailed geographic information in public-use data sets, and accessing restricted data via Research Data Centers. This research examines an alternative method for disseminating microdata that contains more geographical detail than is currently released in public-use data files. Specifically, the method replaces the observed survey values with imputed, or synthetic, values simulated from a hierarchical Bayesian model. Confidentiality protection is enhanced because no actual values are released. The method is demonstrated using restricted data from the 2005–2009 American Community Survey. The analytic validity of the synthetic data is assessed by comparing small area estimates obtained from the synthetic data with those obtained from the observed data. 
Date:  2013–04 
URL:  http://d.repec.org/n?u=RePEc:cen:wpaper:1319&r=ecm 
By:  Sloczynski, Tymon (Warsaw School of Economics) 
Abstract:  In this paper I develop a consistent estimator of the population average treatment effect (PATE) which is based on a nonstandard version of the Oaxaca–Blinder decomposition. As a result, I extend the recent literature which has utilized the treatment effects framework to reinterpret this technique, and propose an alternative solution to its fundamental problem of comparison group choice. I also use the Oaxaca–Blinder decomposition and its semiparametric extension to decompose gender wage differentials with the UK Labour Force Survey (LFS) data, while providing separate estimates of the average gender effect on men, women, and the whole population. 
Keywords:  gender wage gaps, decomposition methods, treatment effects 
JEL:  C21 J31 J71 
Date:  2013–03 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp7315&r=ecm 
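The standard twofold Oaxaca-Blinder decomposition that the paper reinterprets can be sketched on simulated data. The comparison-group choice made here, using group B's coefficients as the reference, is exactly the kind of arbitrary choice the paper's treatment-effects reinterpretation addresses; all data and names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4000

# Simulated wage data for two groups, A and B.
group = rng.integers(0, 2, n)                  # 1 = group A, 0 = group B
educ = rng.normal(12 + 2 * group, 2)           # group A has more schooling
wage = 1.0 + 0.1 * educ + 0.2 * group + rng.normal(0, 0.5, n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

Xa = np.column_stack([np.ones((group == 1).sum()), educ[group == 1]])
Xb = np.column_stack([np.ones((group == 0).sum()), educ[group == 0]])
ba = ols(Xa, wage[group == 1])
bb = ols(Xb, wage[group == 0])

# Twofold decomposition with group B's coefficients as the reference.
gap = wage[group == 1].mean() - wage[group == 0].mean()
explained = (Xa.mean(axis=0) - Xb.mean(axis=0)) @ bb   # characteristics
unexplained = Xa.mean(axis=0) @ (ba - bb)              # coefficients
```

Because each regression includes a constant, the two components sum to the raw mean gap exactly; swapping the reference coefficients changes the split, which is the comparison-group problem the paper tackles.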