New Economics Papers on Econometrics |
By: | Chen, Song Xi; Guo, Bin |
Abstract: | We consider testing regression coefficients in high dimensional generalized linear models. By modifying a test statistic proposed by Goeman et al. (2011) for large but fixed dimensional settings, we propose a new test that is applicable for diverging dimension and is robust to a wide range of link functions. The power properties of the tests are evaluated under local and fixed alternatives. A test in the presence of nuisance parameters is also proposed. The proposed tests can provide p-values for testing the significance of multiple gene-sets, whose usefulness is demonstrated in a case study on an acute lymphoblastic leukemia dataset. |
Keywords: | Generalized Linear Model; Gene-Sets; High Dimensional Covariate; Nuisance Parameter; U-statistics. |
JEL: | C3 C30 C4 C5 |
Date: | 2014 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:59816&r=ecm |
By: | Wang, Luya; Li, Kunpeng; Wang, Zhengwei |
Abstract: | This paper considers the problem of estimating a simultaneous spatial autoregressive (SSAR) model. We propose using the quasi maximum likelihood method to estimate the model. The asymptotic properties of the maximum likelihood estimator, including consistency and the limiting distribution, are investigated. We also run Monte Carlo simulations to examine the finite sample performance of the maximum likelihood estimator. |
Keywords: | Simultaneous equations model, Spatial autoregressive model, Maximum likelihood estimation, Asymptotic theory. |
JEL: | C31 |
Date: | 2014–11 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:59901&r=ecm |
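The quasi maximum likelihood step described in the entry above is, for the single-equation spatial autoregressive building block, simple enough to sketch. Below is a minimal illustration assuming the familiar SAR form y = λWy + Xβ + ε, profiling out β and σ²; the paper's simultaneous (SSAR) system stacks two such equations, and all function names here are illustrative.

```python
# Sketch of quasi maximum likelihood for a single-equation SAR model,
# y = lambda*W*y + X*beta + e. Illustrative only; the paper's SSAR
# system is a two-equation generalization of this.
import numpy as np
from scipy.optimize import minimize_scalar

def sar_qml(y, X, W):
    """Concentrated QML: profile out beta and sigma^2, scan lambda."""
    n = len(y)
    XtXinv_Xt = np.linalg.solve(X.T @ X, X.T)
    def neg_conc_loglik(lam):
        A = np.eye(n) - lam * W
        Ay = A @ y
        beta = XtXinv_Xt @ Ay
        e = Ay - X @ beta
        sig2 = (e @ e) / n
        _, logdet = np.linalg.slogdet(A)          # log|I - lambda*W|
        return -(logdet - 0.5 * n * np.log(sig2))
    res = minimize_scalar(neg_conc_loglik, bounds=(-0.99, 0.99), method="bounded")
    lam = res.x
    beta = XtXinv_Xt @ ((np.eye(n) - lam * W) @ y)
    return lam, beta
```

The Jacobian term log|I − λW| is what distinguishes the quasi likelihood from least squares and keeps the estimator consistent under spatial simultaneity.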
By: | Francis J. DiTraglia (Department of Economics, University of Pennsylvania) |
Abstract: | In finite samples, the use of a slightly endogenous but highly relevant instrument can reduce mean-squared error (MSE). Building on this observation, I propose a moment selection criterion for GMM in which moment conditions are chosen based on the MSE of their associated estimators rather than their validity: the focused moment selection criterion (FMSC). I then show how the framework used to derive the FMSC can address the problem of inference post-moment selection. Treating post-selection estimators as a special case of moment-averaging, in which estimators based on different moment sets are given data-dependent weights, I propose a simulation-based procedure to construct valid confidence intervals for a variety of formal and informal moment-selection procedures. Both the FMSC and confidence interval procedure perform well in simulations. I conclude with an empirical example examining the effect of instrument selection on the estimated relationship between malaria transmission and per-capita income. |
Keywords: | Moment selection, GMM estimation, Model averaging, Focused Information Criterion, Post-selection estimators |
JEL: | C21 C26 C52 |
Date: | 2011–11–09 |
URL: | http://d.repec.org/n?u=RePEc:pen:papers:14-037&r=ecm |
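The opening observation of this abstract — that a slightly endogenous but highly relevant instrument can reduce MSE — is easy to reproduce in a toy Monte Carlo. The sketch below only illustrates that trade-off and is not the FMSC procedure itself; the DGP coefficients are invented for the example.

```python
# Toy Monte Carlo of the observation motivating the FMSC: in finite
# samples a slightly endogenous but strong instrument (z2) can beat a
# valid but weak one (z1) in MSE. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
beta, n, reps = 1.0, 100, 5000

def iv(y, x, z):
    return (z @ y) / (z @ x)            # just-identified IV estimator

est = {"valid weak z1": [], "invalid strong z2": []}
for _ in range(reps):
    z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
    v = rng.standard_normal(n)
    x = 0.2 * z1 + 1.0 * z2 + v                       # z1 weak, z2 strong
    e = 0.5 * v + 0.1 * z2 + rng.standard_normal(n)   # z2 slightly endogenous
    y = beta * x + e
    est["valid weak z1"].append(iv(y, x, z1))
    est["invalid strong z2"].append(iv(y, x, z2))

for k, b in est.items():
    b = np.asarray(b)
    print(k, "bias %.3f  MSE %.3f" % (b.mean() - beta, ((b - beta) ** 2).mean()))
```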
By: | Mangold, Benedikt |
Abstract: | The problem of selecting a prior distribution for Bayes estimation often comes down to a choice between conjugate and noninformative priors, since in both cases the resulting posterior Bayes estimator (PBE) can be derived analytically and is therefore easy to calculate. Nevertheless, some of the implicit assumptions made by choosing a certain prior can be difficult to justify once a concrete sample of small size has been drawn. For example, when the underlying distribution is assumed to be normal, there is no reason to expect that the true but unknown location parameter lies outside the range of the sample, so why should a distribution with a non-compact domain be used as a prior for the mean? In addition, if a sample of small size shows some skewness due to outliers although a symmetric distribution is assumed, this finding can be used to correct the PBE when determining the hyperparameters. Both ideas are applied to an empirical Bayes approach called plausible prior estimation (PPE) for estimating the mean of a normal distribution with known variance in the presence of outliers. We propose an approach for choosing a prior and its respective hyperparameters that takes the above considerations into account. The resulting influence function, as a frequentist measure of robustness, is simulated. Finally, several simulation studies analyze the frequentist performance of the PPE in comparison to frequentist and Bayes estimators in certain outlier scenarios. |
Keywords: | Bayes Statistics, Objective Prior, Robustness |
Date: | 2014 |
URL: | http://d.repec.org/n?u=RePEc:zbw:iwqwdp:092014&r=ecm |
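The abstract's suggestion of a prior with compact domain covering the sample can be made concrete with a small numerical sketch. Assuming, purely for illustration, a uniform prior on the observed sample range, the posterior mean of a normal location parameter with known variance reduces to one-dimensional quadrature; the paper's PPE chooses the prior and hyperparameters more carefully than this.

```python
# Numerical sketch of the compact-support-prior idea: posterior mean of
# mu under X_i ~ N(mu, sigma^2) with a Uniform(min(x), max(x)) prior.
# The uniform prior is a stand-in, not the paper's PPE prior.
import numpy as np

def posterior_mean_compact_prior(x, sigma, grid_size=2001):
    theta = np.linspace(x.min(), x.max(), grid_size)   # compact prior support
    ll = (-0.5 * ((x[:, None] - theta[None, :]) / sigma) ** 2).sum(axis=0)
    w = np.exp(ll - ll.max())          # unnormalized posterior on the grid
    return (theta * w).sum() / w.sum()

x = np.array([1.8, 2.1, 2.4, 1.9, 6.0])   # small sample with an outlier
print(posterior_mean_compact_prior(x, sigma=1.0))
```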
By: | Jozef Baruník (Institute of Economic Studies, Faculty of Social Sciences, Charles University in Prague, Smetanovo nábřeží 6, 111 01 Prague 1, Czech Republic; Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Pod Vodarenskou Vezi 4, 182 00, Prague, Czech Republic); Lucie Kraicová (Institute of Economic Studies, Faculty of Social Sciences, Charles University in Prague, Smetanovo nábřeží 6, 111 01 Prague 1, Czech Republic) |
Abstract: | In this work we focus on the application of wavelet-based methods in volatility modeling. We introduce a new wavelet-based estimator (wavelet Whittle estimator) of a FIEGARCH model, an ARCH-family model capturing long memory and asymmetry in volatility, and study its properties. Based on an extensive Monte Carlo experiment, both the behavior of the new estimator in various situations and its performance relative to two more traditional estimators (the maximum likelihood estimator and the Fourier-based Whittle estimator) are assessed, along with practical aspects of its application. Possible solutions are proposed for most of the issues detected, including a new specification of the estimator using the maximal overlap discrete wavelet transform, which improves the estimator's performance, as we show in an extension of the experiment. Next, we study all the estimators in the case of a FIEGARCH-Jump model, which brings interesting insights into their mechanics. We conclude that, after optimization of the estimation setup, the wavelet-based estimator may become an attractive robust alternative to the traditional methods. |
Keywords: | volatility, long memory, FIEGARCH, wavelets, Whittle, Monte Carlo |
JEL: | C13 C18 C51 G17 |
Date: | 2014–09 |
URL: | http://d.repec.org/n?u=RePEc:fau:wpaper:wp2014_33&r=ecm |
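For readers unfamiliar with the Whittle family of estimators the paper builds on, the sketch below shows the Fourier-based Whittle estimator of a long-memory parameter d, for a plain ARFIMA(0, d, 0) with spectral density proportional to |2 sin(λ/2)|^(−2d). The wavelet Whittle estimator of the paper replaces periodogram ordinates with wavelet-scale variances and targets the FIEGARCH spectral density, neither of which is reproduced here.

```python
# Hedged sketch of the Fourier-based Whittle estimator that the paper's
# wavelet Whittle estimator generalizes, for the long-memory parameter
# d of an ARFIMA(0, d, 0). Illustrative only.
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_d(x):
    n = len(x)
    k = np.arange(1, (n - 1) // 2 + 1)
    lam = 2 * np.pi * k / n
    I = np.abs(np.fft.fft(x - x.mean())[k]) ** 2 / (2 * np.pi * n)  # periodogram
    def obj(d):
        g = np.abs(2 * np.sin(lam / 2.0)) ** (-2 * d)   # spectral shape
        # concentrated (profile) Whittle objective, scale parameter profiled out
        return np.log(np.mean(I / g)) + np.mean(np.log(g))
    return minimize_scalar(obj, bounds=(-0.49, 0.49), method="bounded").x
```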
By: | Bel, K.; Fok, D.; Paap, R. |
Abstract: | The multivariate choice problem with correlated binary choices is investigated. The Multivariate Logit [MVL] model is a convenient model to describe such choices, as it provides a closed-form likelihood function. The disadvantage of the MVL model is that the computation time required for the calculation of choice probabilities increases exponentially with the number of binary choices under consideration. This makes maximum likelihood-based estimation infeasible when there are many binary choices. To solve this issue we propose three novel estimation methods that are much easier to compute, show little loss in efficiency, and perform similarly to the standard maximum likelihood approach in terms of small sample bias. These three methods are based on (i) stratified importance sampling, (ii) composite conditional likelihood, and (iii) the generalized method of moments. Monte Carlo results show that the gain in computation time of the composite conditional likelihood approach is large and convincingly outweighs the limited loss in efficiency. This estimation approach makes it feasible to straightforwardly apply the MVL model in practical cases where the number of studied binary choices is large. |
Keywords: | Multivariate Logit model, Stratified Importance Sampling, Composite Likelihood, Generalized Method of Moments |
Date: | 2014–10–20 |
URL: | http://d.repec.org/n?u=RePEc:ems:eureir:77167&r=ecm |
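Of the three methods, the composite conditional likelihood (ii) admits a particularly compact sketch: under the MVL model the full conditional of one binary choice given the others is an ordinary logit, so the 2^K-term joint likelihood can be replaced by K logit terms per observation. The parameter layout below (intercepts plus a symmetric association matrix) is a simplified stand-in for the paper's covariate-dependent specification.

```python
# Sketch of the composite conditional log-likelihood for an MVL model.
# alpha: (K,) intercepts (x'beta_k in the general case);
# Sigma: (K, K) symmetric association matrix with zero diagonal;
# Y: (n, K) matrix of observed binary choices.
import numpy as np

def composite_cond_loglik(alpha, Sigma, Y):
    # conditional logit index of each choice given all the others;
    # the zero diagonal of Sigma drops the own term automatically
    eta = alpha[None, :] + Y @ Sigma            # (n, K)
    p = 1.0 / (1.0 + np.exp(-eta))
    return np.sum(Y * np.log(p) + (1 - Y) * np.log1p(-p))
```

Maximizing this criterion over (alpha, Sigma), e.g. with scipy.optimize.minimize, gives the composite estimator; the cost grows linearly rather than exponentially in the number of choices K.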
By: | Shonosuke Sugasawa (Graduate School of Economics, University of Tokyo); Tatsuya Kubokawa (Faculty of Economics, The University of Tokyo) |
Abstract: | This paper is concerned with the prediction of the conditional mean, which involves fixed and random effects, based on the natural exponential family with a quadratic variance function. The best predictor is interpreted as the Bayes estimator in the Bayesian context, and the empirical Bayes estimator (EB) is useful for small area estimation in the sense of increasing the precision of prediction for small area means. When data of the small area of interest are observed and one wants to know the prediction error of the EB based on those data, the conditional mean squared error (cMSE) given the data is used instead of the conventional unconditional MSE. The difference between the two kinds of MSE is small and appears in the second-order terms in the classical normal theory mixed model. However, it is shown that the difference appears in the first-order or leading terms for distributions far from normality. In particular, the leading term in the cMSE is a quadratic concave function of the direct estimate in the small area for the binomial-beta mixed model, and an increasing function for the Poisson-gamma mixed model, while the leading terms in the unconditional MSEs are constants for the two mixed models. Second-order unbiased estimators of the cMSE are provided in two ways, based on analytical and parametric bootstrap methods. Finally, the performances of the EB and the estimator of the cMSE are examined through simulation and empirical studies. |
Date: | 2014–06 |
URL: | http://d.repec.org/n?u=RePEc:tky:fseres:2014cf934&r=ecm |
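For concreteness, in the binomial-beta mixed model mentioned in the abstract the best predictor (the posterior mean under the standard parameterization, assumed here) is the usual shrinkage of the direct estimate toward the prior mean:

```latex
% Binomial-beta small-area model (standard parameterization, assumed here):
%   y_i | p_i ~ Bin(n_i, p_i),   p_i ~ Beta(alpha, beta)
% Best predictor = posterior mean, shrinking the direct estimate y_i/n_i:
\hat p_i^{\mathrm{B}} = \mathbb{E}[p_i \mid y_i]
  = \frac{y_i + \alpha}{n_i + \alpha + \beta}
  = \gamma_i \frac{y_i}{n_i} + (1 - \gamma_i) \frac{\alpha}{\alpha + \beta},
\qquad \gamma_i = \frac{n_i}{n_i + \alpha + \beta}
```

The paper's cMSE results concern the conditional accuracy of the empirical version of this predictor, with (α, β) estimated from the data.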
By: | Jozef Baruník (Institute of Economic Studies, Charles University, Opletalova 26, 110 00, Prague, CR and Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Pod Vodarenskou Vezi 4, 182 00, Prague, Czech Republic.); František Čech (Institute of Economic Studies, Charles University, Opletalova 26, 110 00, Prague, CR and Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Pod Vodarenskou Vezi 4, 182 00, Prague, Czech Republic.) |
Abstract: | We introduce a methodology for dynamic modelling and forecasting of realized covariance matrices based on a generalization of the heterogeneous autoregressive model (HAR) for realized volatility. Multivariate extensions of the popular HAR framework leave substantial information unmodeled in the residuals. We propose to employ a system of seemingly unrelated regressions to capture this information. The newly proposed generalized heterogeneous autoregressive (GHAR) model is tested against natural competing models. In order to show the economic and statistical gains of the GHAR model, portfolios of various sizes are used. We find that our modeling strategy outperforms competing approaches in terms of statistical precision, and provides economic gains in terms of the mean-variance trade-off. Additionally, our results provide a comprehensive comparison of performance when the realized covariance is replaced by the more efficient, noise-robust multivariate realized kernel estimator. We study the contribution of both estimators across different sampling frequencies, and we show that the multivariate realized kernel estimator delivers further gains compared to realized covariance estimated at higher frequencies. |
Keywords: | GHAR, portfolio optimisation, economic evaluation |
JEL: | C18 C58 G15 |
Date: | 2014–08 |
URL: | http://d.repec.org/n?u=RePEc:fau:wpaper:wp2014_23&r=ecm |
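The univariate HAR regression that GHAR generalizes is short enough to write out. The sketch below builds the standard daily/weekly/monthly regressors and fits them by OLS on synthetic data; the paper's GHAR applies this structure to the elements of a decomposed realized covariance matrix and estimates the equations jointly by SUR, which is not reproduced here.

```python
# Sketch of the univariate HAR regression underlying GHAR: daily
# realized variance on daily, weekly and monthly averages. The data
# below are synthetic placeholders.
import numpy as np

def har_design(rv):
    """Build HAR regressors from a 1-D array of daily realized variances."""
    rows = []
    for t in range(22, len(rv)):
        rows.append([1.0,
                     rv[t - 1],                # daily lag
                     rv[t - 5:t].mean(),       # weekly average
                     rv[t - 22:t].mean()])     # monthly average
    return np.asarray(rows), rv[22:]

X, y = har_design(np.abs(np.random.default_rng(1).standard_normal(500)))
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS fit of the HAR
print(beta)
```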
By: | Zapata, Samuel D.; Carpio, Carlos E. |
Abstract: | The Turnbull method is the standard approach used in contingent valuation studies to estimate willingness to pay (WTP) models using discrete responses without making assumptions about the distribution of the data. However, this approach has several limitations. The purpose of this study is to develop alternative distribution-free methods for the estimation of WTP models using nonparametric conditional imputation and local regression procedures. The proposed approaches encompass the recovery of the individuals’ WTP values using an iterated conditional expectation procedure and subsequent estimation of the mean WTP using linear and nonparametric additive models. In contrast to the Turnbull approach, the proposed estimation methods allow the inclusion of covariates in the modeling of WTP estimates, as well as the complete recovery of its underlying probability distribution. Monte Carlo simulations are employed to compare the performance of the proposed estimators with that of the Turnbull estimator. We also illustrate the use of the proposed estimation techniques using a real data set. |
Keywords: | Additive models, double-bounded elicitation, kernel functions, iterated conditional expectation, non-parametric regression, Turnbull estimator, Research Methods/Statistical Methods |
Date: | 2014–05 |
URL: | http://d.repec.org/n?u=RePEc:ags:aaea14:170453&r=ecm |
By: | Josué M. Polanco-Martínez; Sérgio H. Faria |
Abstract: | Here we present some preliminary results of a statistical–computational implementation to estimate the wavelet spectrum of unevenly spaced paleoclimate time series by means of the Morlet Weighted Wavelet Z-Transform (MWWZ). A statistical significance test is performed against an ensemble of first-order auto-regressive models (AR1) by means of Monte Carlo simulations. In order to demonstrate the capabilities of this implementation, we apply it to the oxygen isotope ratio (δ18O) data of the GISP2 deep ice core (Greenland). |
Keywords: | wavelet spectral analysis, continuous wavelet transform, Morlet Weighted Wavelet Z-Transform, unevenly spaced paleoclimate time series, non-stationarity, multi-scale phenomena |
Date: | 2014–10 |
URL: | http://d.repec.org/n?u=RePEc:bcc:wpaper:2014-07&r=ecm |
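A key ingredient of the significance test described above is an AR(1) ensemble defined on the same uneven time grid as the data. A common device (assumed here, since the entry does not spell out its surrogate generator) is the continuous-time AR(1), whose persistence between consecutive samples decays exponentially with the time gap:

```python
# Sketch of AR(1) surrogate generation on an uneven time grid, with
# step-dependent persistence exp(-dt/tau). In practice the decay time
# tau would be fitted to the data; here it is a free input.
import numpy as np

def ar1_uneven(times, tau, n_surrogates, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    dt = np.diff(times)
    phi = np.exp(-dt / tau)                     # step-dependent persistence
    out = np.empty((n_surrogates, len(times)))
    out[:, 0] = rng.standard_normal(n_surrogates)
    for i, p in enumerate(phi, start=1):
        innov = rng.standard_normal(n_surrogates) * np.sqrt(1.0 - p * p)
        out[:, i] = p * out[:, i - 1] + innov   # stationary, unit variance
    return out
```

Computing the wavelet spectrum of each surrogate and taking pointwise percentiles then yields the Monte Carlo significance thresholds.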
By: | Wang, Chao; Gerlach, Richard |
Abstract: | The realized GARCH framework is extended to incorporate the realized range, and the intra-day range, as potentially more efficient series of information than realized variance or daily returns, for the purpose of volatility and tail risk forecasting in a financial time series. A Bayesian adaptive Markov chain Monte Carlo method is employed for estimation and forecasting. Compared to a range of well known parametric GARCH models, predictive log-likelihood results across six market index return series favor the realized GARCH models incorporating the realized range. Further, these same models also compare favourably for tail risk forecasting, both during and after the global financial crisis. |
Keywords: | Tail Risk Forecasting; Predictive Likelihood; Realized GARCH; Realized Variance; Intra-day Range; Realized Range |
Date: | 2014–11–07 |
URL: | http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/12235&r=ecm |
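The realized range used in this extension is, in its basic form, the sum of squared intraday high-low log ranges with the Parkinson scaling factor 4 log 2, which makes it unbiased for the daily variance of a driftless Brownian motion. A minimal sketch, with the block structure of the intraday data assumed:

```python
# Sketch of the (Parkinson-scaled) realized range for one trading day:
# sum of squared high-low log ranges over intraday blocks, divided by
# 4*log(2). Input arrays are assumed to hold block highs and lows.
import numpy as np

def realized_range(high, low):
    r = np.log(np.asarray(high)) - np.log(np.asarray(low))
    return np.sum(r ** 2) / (4.0 * np.log(2.0))
```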
By: | Hac\`ene Djellout; Arnaud Guillin; Yacouba Samoura |
Abstract: | Realized statistics based on high frequency returns have become very popular in financial economics. In recent years, different non-parametric estimators of the variation of a log-price process have appeared. These were developed by many authors and were motivated by the existence of complete records of price data. Among them, the realized quadratic (co-)variation is perhaps the best known example, providing a consistent estimator of the integrated (co-)volatility when the logarithmic price process is continuous. Limit results such as the weak law of large numbers or the central limit theorem have been proved in different contexts. In this paper, we propose to study the large deviation properties of realized (co-)volatility, i.e., when the number of high frequency observations in a fixed time interval increases to infinity. More specifically, we consider a bivariate model with synchronous observation schemes and correlated Brownian motions of the following form: $dX_{\ell,t} = \sigma_{\ell,t}dB_{\ell,t}+b_{\ell}(t,\omega)dt$ for $\ell=1,2$, where $X_{\ell}$ denotes the log-price. We are concerned with the large deviation estimation of the vector $V_t^n(X)=(Q_{1,t}^n(X), Q_{2,t}^n(X), C_{t}^n(X))$, where $Q_{\ell,t}^n(X)$ and $C_{t}^n(X)$ represent the estimators of the quadratic variation processes $Q_{\ell,t}=\int_0^t\sigma_{\ell,s}^2ds$ and the integrated covariance $C_t=\int_0^t\sigma_{1,s}\sigma_{2,s}\rho_sds$ respectively, with $\rho_t=\mathrm{cov}(B_{1,t}, B_{2,t})$. Our main motivation is to improve upon the existing limit theorems. Our large deviation results can be used to evaluate and approximate tail probabilities of realized (co-)volatility. As an application we provide the large deviations for standard dependence measures between the two assets' returns, such as the realized regression coefficients up to time $t$, or the realized correlation. Our study should contribute to the recent trend of research on (co-)variance estimation problems, which are often discussed in high-frequency financial data analysis. |
Date: | 2014–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1411.5159&r=ecm |
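For reference, the statistics entering the vector $V_t^n(X)$ are the standard realized (co-)variation estimators; with $\Delta_i^n X = X_{t_i} - X_{t_{i-1}}$ over synchronous intraday increments (notation assumed to match the abstract):

```latex
% Realized (co-)variation over n synchronous increments:
Q_{\ell,t}^{n}(X) = \sum_{i=1}^{\lfloor nt\rfloor} \bigl(\Delta_i^n X_{\ell}\bigr)^2
  \;\xrightarrow{\mathbb{P}}\; \int_0^t \sigma_{\ell,s}^2\,ds, \qquad \ell = 1,2,
\qquad
C_{t}^{n}(X) = \sum_{i=1}^{\lfloor nt\rfloor} \Delta_i^n X_1\, \Delta_i^n X_2
  \;\xrightarrow{\mathbb{P}}\; \int_0^t \sigma_{1,s}\sigma_{2,s}\rho_s\,ds
```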
By: | Harald Badinger (Department of Economics, Vienna University of Economics and Business); Jesus Crespo Cuaresma (Department of Economics, Vienna University of Economics and Business) |
Abstract: | This paper considers alternative methods to estimate econometric models based on bilateral data when only aggregate information on the dependent variable is available. Such methods can be used to obtain an indication of the sign and magnitude of bilateral model parameters and, more importantly, to decompose aggregate data into bilateral data, which can then be used as proxy variables in further empirical analysis. We perform a Monte Carlo study and carry out a simple real-world application using intra-EU trade and capital flows, showing that the methods considered work reasonably well and are worth considering in the absence of bilateral data. |
Keywords: | Aggregation, gravity equations |
JEL: | C13 F14 F17 |
Date: | 2014–09 |
URL: | http://d.repec.org/n?u=RePEc:wiw:wiwwuw:wuwp183&r=ecm |
By: | Hyeongwoo Kim; Jintae Kim |
Abstract: | This paper revisits empirical evidence of mean reversion of relative stock prices in international stock markets. We implement a battery of univariate and panel unit root tests for linear and nonlinear models of 18 national stock indices during the period 1969 to 2012. Our major findings are as follows. First, we find little evidence of linear mean reversion irrespective of the choice of reference country. Employing panel tests yields the same conclusion once cross-section dependence is controlled for. Second, we find strong evidence of nonlinear mean reversion when the UK serves as the reference country, calling attention to the stock index in the UK. Choosing the US as the reference yields very weak evidence of nonlinear stationarity. Third, via extensive Monte Carlo simulations, we demonstrate a potential pitfall in using panel unit root tests with cross-section dependence when a stationary common factor dominates nonstationary idiosyncratic components in small samples. |
Keywords: | Unit Root Test; Exponential Smooth Transition Autoregressive (ESTAR) Unit Root Test; Nonlinear Panel unit root test; Panel Analysis of Nonstationarity in Idiosyncratic and Common Components (PANIC) |
JEL: | C22 G10 G15 |
Date: | 2014–11 |
URL: | http://d.repec.org/n?u=RePEc:abn:wpaper:auwp2014-13&r=ecm |
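The nonlinear tests referenced in the keywords are of the Kapetanios-Shin-Snell (ESTAR) type, whose core auxiliary regression is a one-liner: the differenced series is regressed on the cubed lagged level and the t-ratio is compared with nonstandard critical values. A sketch of the demeaned, no-augmentation version follows; the 5% critical value of roughly −2.93 for this case is quoted from memory and should be checked against the source.

```python
# Sketch of the KSS (ESTAR) unit root test statistic, demeaned case:
# regress delta z_t on z_{t-1}^3 and report the t-ratio; reject a unit
# root for large negative values. No lag augmentation included.
import numpy as np

def kss_stat(y):
    z = y - y.mean()                     # demeaned series
    dz, z1 = np.diff(z), z[:-1] ** 3     # delta z_t on z_{t-1}^3
    delta = (z1 @ dz) / (z1 @ z1)
    resid = dz - delta * z1
    se = np.sqrt(resid @ resid / (len(dz) - 1) / (z1 @ z1))
    return delta / se
```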
By: | Erika Gomes-Goncalves (UC3M); Henryk Gzyl (IESA); Silvia Mayoral (UC3M) |
Abstract: | Here we present an application of two maxentropic procedures to determine the probability density of compound sums of random variables, using only a finite number of empirically determined fractional moments. The two methods are the standard method of maximum entropy (SME) and the method of maximum entropy in the mean (MEM). We verify that the reconstructions obtained satisfy a variety of statistical quality criteria and provide good estimates of VaR and TVaR, which are important measures for risk management purposes. We analyze the performance and robustness of the two procedures in several numerical examples, in which the frequency of losses is Poisson and the individual losses are lognormal random variables. As a side product of the work, we obtain a rather accurate description of the density of the compound random variable. This extends a previous application of the SME approach, in which the analytic form of the Laplace transform was available, to the case in which only observed or simulated data are used. The two approaches are also used to develop a procedure for determining the distribution of the individual losses from knowledge of the total loss. Thus, when only historical total losses are available, it is possible to decompound, or disaggregate, the random sums into their frequency and severity distributions by solving a probabilistic inverse problem. |
Date: | 2014–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1411.5625&r=ecm |
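The SME step described above has a convenient convex dual: with fractional moment constraints E[X^{α_j}] = μ_j, the maxentropic density is proportional to exp(−Σ_j λ_j x^{α_j}), and the multipliers minimize log Z(λ) + λ'μ. A self-contained quadrature sketch, with losses assumed rescaled to [0, 1]:

```python
# Sketch of standard maximum entropy (SME) from fractional moments
# mu_j = E[X^alpha_j], for a density on [0, 1], via the convex dual
# min_lam log Z(lam) + lam'mu. Grid quadrature keeps it self-contained.
import numpy as np
from scipy.optimize import minimize

def sme_density(alphas, mus):
    grid = np.linspace(1e-6, 1.0, 2000)
    A = grid[None, :] ** np.asarray(alphas)[:, None]   # x^alpha_j on the grid
    mus = np.asarray(mus)
    def dual(lam):
        # log Z(lam) + lam'mu, with Z = integral of exp(-sum_j lam_j x^alpha_j)
        return np.log(np.trapz(np.exp(-lam @ A), grid)) + lam @ mus
    lam = minimize(dual, x0=np.zeros(len(mus)), method="BFGS").x
    f = np.exp(-lam @ A)
    return grid, f / np.trapz(f, grid)                 # normalized density
```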
By: | Bacci, Silvia; Bartolucci, Francesco; Pigini, Claudia; Signorelli, Marcello |
Abstract: | We propose a finite mixture latent trajectory model to study the behavior of firms in terms of open-ended employment contracts that are activated and terminated during a certain period. The model is based on the assumption that the population of firms is composed of unobservable clusters (or latent classes) with a homogeneous time trend in the number of hirings and separations. Our proposal also accounts for the presence of informative drop-out due to the exit of a firm from the market. Parameter estimation is based on the maximum likelihood method, which is efficiently performed through an EM algorithm. The model is applied to data from the Compulsory Communication dataset of the local labor office of the province of Perugia (Italy) for the period 2009-2012. The application reveals the presence of six latent classes of firms. |
Keywords: | Finite mixture models, Latent trajectory model, Compulsory communications, hirings and separations |
JEL: | C33 C49 J63 |
Date: | 2014–11 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:59730&r=ecm |
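The EM machinery behind such latent class models is standard and worth a minimal sketch. The version below is deliberately reduced to a two-class mixture of Poisson count sequences with class-specific means; the paper's model adds time trends, covariates and informative drop-out, none of which are reproduced here.

```python
# Hedged sketch of EM for a two-class mixture of Poisson count
# trajectories, illustrating the estimation idea only.
import numpy as np
from scipy.stats import poisson

def em_poisson_mixture(Y, n_iter=200):
    """Y: (n_firms, T) matrix of counts (e.g., monthly hirings)."""
    rng = np.random.default_rng(0)
    lam = Y.mean() * (0.5 + rng.random(2))     # class means, random start
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior class membership probabilities
        logp = np.stack([poisson.logpmf(Y, l).sum(axis=1) for l in lam], axis=1)
        logp += np.log(pi)
        w = np.exp(logp - logp.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        # M-step: weighted class shares and Poisson means
        pi = w.mean(axis=0)
        lam = (w.T @ Y).mean(axis=1) / w.sum(axis=0)
    return pi, lam
```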
By: | Wu, Feng; Guan, Zhengfei |
Abstract: | Recent developments in production risk analysis have raised questions about the conventional approaches to estimating risk preferences. This study proposes to identify the risk component separately from the input equations with a seminonparametric estimator. The approach circumvents the issue of arbitrary risk specifications and facilitates the analytical derivation of input equations. The GMM estimation method is then applied to the input equations to estimate risk preferences. The procedure is validated by a Monte Carlo experiment. Simulation results show that the proposed method provides a consistent estimator and significantly improves estimation efficiency. |
Keywords: | Risk Preferences, GMM, Simulations, Seminonparametric Estimator, Estimation Efficiency, Production Economics, Risk and Uncertainty, C14, Q12 |
Date: | 2014 |
URL: | http://d.repec.org/n?u=RePEc:ags:aaea14:170625&r=ecm |
By: | Minegishi, Kota |
Abstract: | A method is developed to integrate the efficiency concepts of technical, allocative, and scale inefficiencies (TI, AI, SI) into the variable returns to scale (VRS) frontier approximation in Data Envelopment Analysis (DEA). The proposed weighted DEA (WDEA) approach takes a weighted average of the profit, constant returns to scale (CRS), and VRS frontiers, so that the technical feasibility of a VRS frontier is extended toward scale- and allocatively-efficient decisions. A weight selection rule is constructed based on the empirical performance of the VRS estimator via the local confidence interval of Kneip, Simar, and Wilson (2008). The resulting WDEA frontier is consistent and more efficient than the VRS frontier under the maintained properties of a data generating process. The potential estimation efficiency gain arises from exploiting sample correlations among TI, AI, and SI. Application to Maryland dairy production data finds that technical efficiency is on average 5.2% to 7.8% lower under the WDEA results than under the VRS counterparts. |
Keywords: | Data Envelopment Analysis, Technical Efficiency, Allocative Efficiency, Scale Efficiency, Agricultural Economics, Livestock Production/Industries, Production Economics, Productivity Analysis, D22, Q12, C44 |
Date: | 2014 |
URL: | http://d.repec.org/n?u=RePEc:ags:aaea14:170277&r=ecm |
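The VRS frontier that WDEA averages with the CRS and profit frontiers is computed, firm by firm, from the standard input-oriented DEA linear program. A sketch using scipy's linprog follows; the convexity constraint Σλ = 1 is what distinguishes VRS from CRS.

```python
# Sketch of the input-oriented VRS DEA score for one firm (x0, y0):
# minimize theta s.t. a convex combination of observed firms dominates
# (theta*x0, y0). Variable order is [theta, lambda_1..lambda_n].
import numpy as np
from scipy.optimize import linprog

def vrs_score(X, Y, x0, y0):
    """X: (n, m) inputs, Y: (n, s) outputs of n observed firms."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                      # minimize theta
    A_ub = np.vstack([
        np.c_[-x0[:, None], X.T],                    # X'lam <= theta*x0
        np.c_[np.zeros((Y.shape[1], 1)), -Y.T],      # Y'lam >= y0
    ])
    b_ub = np.r_[np.zeros(len(x0)), -y0]
    A_eq = np.r_[0.0, np.ones(n)][None, :]           # sum(lam) = 1 (VRS)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                                   # theta in (0, 1]
```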
By: | Parman, Bryon; Featherstone, Allen; Amanor-Boadu, Vincent |
Abstract: | This research examines the robustness of four different estimation approaches in terms of their ability to estimate a “true” cost frontier and associated economic measures. The manuscript evaluates three parametric methods: a two-sided error system, OLS with only positive errors, and the stochastic frontier method. The fourth method is the nonparametric DEA method, augmented to calculate multi-product and product-specific economies of scale. The robustness of the four estimation methods is examined using simulated data sets from two different distributions and two different sample sizes. The theoretical curvature condition for the estimated cost functions was checked for the input price and output quantity matrices. Calculation of the eigenvalues revealed that all three parametric estimation methods violated curvature of either the price or quantity matrix, or both. Calculation of the estimated economic efficiency measures shows the parametric methods to be susceptible to distributional assumptions. However, the DEA method is fairly robust in all three simulations, estimating the “true” cost frontier and associated economic measures while maintaining curvature of the cost function. |
Keywords: | Production, Productivity Analysis, Data Envelopment Analysis, Frontier Analysis, Agribusiness, Farm Management, Production Economics |
Date: | 2014 |
URL: | http://d.repec.org/n?u=RePEc:ags:aaea14:169877&r=ecm |
By: | William A. Barnett; Marcelle Chauvet; Danilo Leiva-Leon |
Abstract: | This paper provides a framework for the early assessment of current U.S. nominal GDP growth, which has been considered a potential new monetary policy target. The nowcasts are computed using the exact amount of information that policy-makers have available at the time predictions are made. However, real-time information arrives at different frequencies and asynchronously, which poses challenges of mixed frequencies, missing data and ragged edges. This paper proposes a multivariate state-space model that not only takes into account asynchronous information inflow, but also allows for potential parameter instability. We use small-scale confirmatory factor analysis in which the candidate variables are selected based on their ability to forecast nominal GDP. The model is fully estimated in one step using a non-linear Kalman filter, which is applied to obtain optimal inferences simultaneously on both the dynamic factor and parameters. In contrast to principal component analysis, the proposed factor model captures the co-movement rather than the variance underlying the variables. We compare the predictive ability of the model with other univariate and multivariate specifications. The results indicate that the proposed model containing information on real economic activity, inflation, interest rates and Divisia monetary aggregates produces the most accurate real-time nowcasts of nominal GDP growth. |
Keywords: | Business fluctuations and cycles, Econometric and statistical methods, Inflation and prices |
JEL: | C32 E27 E31 E32 |
Date: | 2014 |
URL: | http://d.repec.org/n?u=RePEc:bca:bocawp:14-39&r=ecm |
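The ragged-edge problem mentioned in the abstract is usually handled inside the Kalman filter by skipping the measurement update for series not yet released. A deliberately simplified sketch (scalar factor, fixed parameters, diagonal measurement noise so observations can be processed one at a time) illustrates the device; the paper's model is non-linear in the parameters and considerably richer than this.

```python
# Sketch of a Kalman filter that skips updates for missing (NaN)
# observations, so each nowcast uses exactly the data released so far.
# Scalar state s_t = T*s_{t-1} + e_t, observations y_{it} = Z_i*s_t + u_it.
import numpy as np

def kalman_nowcast(Y, Z, T, H, Q):
    """Y: (time, k) panel with NaNs; Z, H: (k,) loadings and noise variances."""
    s, P = 0.0, 1.0
    states = []
    for y in Y:
        s, P = T * s, T * P * T + Q                  # predict
        for i in np.where(~np.isnan(y))[0]:          # update per released series
            F = Z[i] * P * Z[i] + H[i]
            K = P * Z[i] / F
            s += K * (y[i] - Z[i] * s)
            P *= (1.0 - K * Z[i])
        states.append(s)
    return np.asarray(states)
```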
By: | Bresson G.; Etienne J.; Mohnen P. (UNU-MERIT) |
Abstract: | This paper proposes a Bayesian approach to estimating a factor-augmented productivity equation. We exploit the panel dimension of our data and distinguish individual-specific and time-specific factors. On the basis of 21 technology, infrastructure and institution indicators from 82 countries over the 19-year period 1990 to 2008, we construct summary indicators of these three components and estimate their effect on growth and international differences in GDP per capita. |
Keywords: | Single Equation Models; Single Variables: Models with Panel Data; Longitudinal Data; Spatial Time Series; Multiple or Simultaneous Equation Models: Classification Methods; Cluster Analysis; Factor Models; Measurement of Economic Growth; Aggregate Productivity; Cross-Country Output Convergence |
JEL: | C23 C38 O47 |
Date: | 2014 |
URL: | http://d.repec.org/n?u=RePEc:unm:unumer:2014052&r=ecm |