
# Feature Selection


Feature selection (also known as subset selection) is a process commonly used in machine learning, wherein a subset of the features available from the data is selected for application of a learning algorithm. The best subset contains the smallest number of dimensions that contribute most to accuracy; we discard the remaining, unimportant dimensions. This is an important stage of pre-processing and is one of two ways of avoiding the curse of dimensionality (the other being feature extraction). There are two approaches:

• Forward selection: start with no variables and add them one by one, at each step adding the one that decreases the error the most, until any further addition does not significantly decrease the error.
• Backward selection: start with all the variables and remove them one by one, at each step removing the one whose removal decreases the error the most (or increases it only slightly), until any further removal increases the error significantly.

To reduce overfitting, the error referred to above is measured on a validation set that is distinct from the training set.
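Read procedurally, forward selection is a simple greedy loop. A minimal sketch in Python (illustrative names throughout, not any library's API; `error` is assumed to be a function returning the validation-set error of a model trained on the given subset, and `toy_error` below is a made-up stand-in for it):

```python
def forward_selection(features, error, tolerance=1e-6):
    """Greedily add the feature that most reduces validation error."""
    selected = []
    remaining = list(features)
    best_err = error(selected)  # baseline: error with no features
    while remaining:
        # Try adding each remaining feature; keep the best candidate.
        candidate, cand_err = min(
            ((f, error(selected + [f])) for f in remaining),
            key=lambda pair: pair[1],
        )
        if best_err - cand_err <= tolerance:
            break  # no significant decrease in error: stop
        selected.append(candidate)
        remaining.remove(candidate)
        best_err = cand_err
    return selected, best_err


# Toy validation-error function: 'a' and 'b' are informative, 'c' is noise.
def toy_error(subset):
    err = 1.0
    if "a" in subset:
        err -= 0.5
    if "b" in subset:
        err -= 0.3
    if "c" in subset:
        err += 0.01
    return err


print(forward_selection(["a", "b", "c"], toy_error))  # selects 'a' then 'b'
```

Backward selection is the mirror image: start from the full feature list and repeatedly drop the feature whose removal yields the lowest validation error, stopping when any further removal increases it significantly.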

"There are two main methods for reducing dimensionality: feature selection and feature extraction. In feature selection, we are interested in finding k of the d dimensions that give us the most information and we discard the other (d - k) dimensions. We are going to discuss subset selection as a feature selection method.
[...]
In subset selection, we are interested in finding the best subset of the set of features. The best subset contains the least number of dimensions that most contribute to accuracy. We discard the remaining, unimportant dimensions. Using a suitable error function, this can be used in both regression and classification problems. There are 2^d possible subsets of d variables, but we cannot test for all of them unless d is small and we employ heuristics to get a reasonable (but not optimal) solution in reasonable (polynomial) time.
There are two approaches: In forward selection, we start with no variables and add them one by one, at each step adding the one that decreases the error the most, until any further addition does not decrease the error (or decreases it only sightly (sic)). In backward selection, we start with all variables and remove them one by one, at each step removing the one that decreases the error the most (or increases it only slightly), until any further removal increases the error significantly. In either case, checking the error should be done on a validation set distinct from the training set because we want to test the generalization accuracy. With more features, generally we have lower training error, but not necessarily lower validation error.
[...]"
Alpaydin (2004), p 106

"An important issue that often confronts data miners in practice is the problem of having too many variables. Simply put, not all variables that are measured are likely to be necessary for accurate discrimination and including them in the classification model may in fact lead to a worse model than if they were removed. Consider the simple example of building a system to discriminate between images of male and female faces (a task that humans perform effortlessly and relatively accurately but that is quite challenging for an image classification algorithm). The colors of a person's eyes, hair, or skin are hardly likely to be useful in this discriminative context. These are variables that are easy to measure (and indeed are general characteristics of a person's appearance) but carry little information as to the class identity in this particular case."
Hand, Mannila and Smyth (2001), p 362

"One of the central issues in induction concerns the selection of useful features. Although most learning methods attempt to either select attributes or assign them degrees of importance, both theoretical analyses and experimental studies indicate that many algorithms scale poorly to domains with large numbers of irrelevant features. For example, the number of training cases needed for simple nearest neighbor [...] to reach a given level of accuracy appears to grow exponentially with the number of irrelevant features, independent of the target concept. Even methods for inducing univariate decision trees, which explicitly select some attributes in favor of others, exhibit this behavior for some target concepts. And some techniques, like the naive Bayesian classifier [...], can be very sensitive to domains with correlated attributes. This suggests the need for additional methods to select a useful subset of features when many are available."
Langley (1996), p 233, p 253

"Feature selection, also known as subset selection or variable selection, is a process commonly used in machine learning, wherein a subset of the features available from the data are selected for application of a learning algorithm. Feature selection is necessary either because it is computationally infeasible to use all available features, or because of problems of estimation when limited data samples (but a large number of features) are present. The latter problem is related to the so-called curse of dimensionality."
Wikipedia (2006)

## Most Cited

• MILLER, Alan, 2002. Subset Selection in Regression. books.google.com. [Cited by 489] (114.65/year)
• @book{Miller02, author = {Alan Miller}, editor = {}, title = {Subset Selection in Regression}, chapter = {}, pages = {}, publisher = {Chapman \& Hall/CRC}, year = {2002}, volume = {}, series = {}, address = {}, edition = {Second}, month = {}, note = {}, key = {} } See \citeasnoun{Miller02} for a book on subset selection in regression.
• YANG, Yiming and Jan O. PEDERSEN, 1997. A comparative study on feature selection in text categorization. Proceedings of the Fourteenth International Conference on …. [Cited by 901] (97.25/year)
• "This paper is a comparative study of feature selection methods in statistical learning of text categorization. The focus is on aggressive dimensionality reduction. Five methods were evaluated, including term selection based on document frequency (DF), information gain (IG), mutual information (MI), a $\chi^2$-test (CHI), and term strength (TS). We found IG and CHI most effective in our experiments.[...]" @inproceedings{YangPedersen97, author = {Yiming Yang and Jan O. Pedersen}, title = {A Comparative Study of Feature Selection in Text Categorization}, booktitle = {ICML '97: Proceedings of the Fourteenth International Conference on Machine Learning}, year = {1997}, editor = {}, pages = {412--420}, organization = {}, publisher = {Morgan Kaufmann Publishers Inc.}, address = {San Francisco, CA, USA}, month = {}, note = {}, key = {}, abstract = {This paper is a comparative study of feature selection methods in statistical learning of text categorization. The focus is on aggressive dimensionality reduction. Five methods were evaluated, including term selection based on document frequency (DF), information gain (IG), mutual information (MI), a $\chi^2$-test (CHI), and term strength (TS). We found IG and CHI most effective in our experiments. Using IG thresholding with a k-nearest neighbor classifier on the Reuters corpus, removal of up to 98\% of unique terms actually yielded an improved classification accuracy (measured by average precision). DF thresholding performed similarly. Indeed we found strong correlations between the DF, IG and CHI values of a term. This suggests that DF thresholding, the simplest method with the lowest cost in computation, can be reliably used instead of IG or CHI when the computation of these measures are too expensive. TS compares favorably with the other methods with up to 50\% vocabulary reduction but is not competitive at higher vocabulary reduction levels.
In contrast, MI had relatively poor performance due to its bias towards favoring rare terms, and its sensitivity to probability estimation errors.} } In a comparative study of feature selection methods in statistical learning of text categorization (with a focus on aggressive dimensionality reduction), \citeasnoun{YangPedersen97} evaluated document frequency (DF), information gain (IG), mutual information (MI), a $\chi^2$-test (CHI) and term strength (TS); and found IG and CHI to be the most effective.
• KOHAVI, Ron and George H. JOHN, 1997. Wrappers for Feature Subset Selection, Artificial Intelligence. [Cited by 768] (82.89/year)
• "In the feature subset selection problem, a learning algorithm is faced with the problem of selecting a relevant subset of features upon which to focus its attention, while ignoring the rest. To achieve the best possible performance with a particular learning algorithm on a particular training set, a feature subset selection method should consider how the algorithm and the training set interact. We explore the relation between optimal feature subset selection and relevance. Our wrapper method searches for an optimal feature subset tailored to a particular algorithm and a domain. We study the strengths and weaknesses of the wrapper approach and show a series of improved designs. We compare the wrapper approach to induction without feature subset selection and to Relief, a filter approach to feature subset selection. Significant improvement in accuracy is achieved for some datasets for the two families of induction algorithms used: decision trees and Naive-Bayes." @article{KohaviJohn97, author = {Ron Kohavi and George H. John}, title = {Wrappers for Feature Subset Selection}, journal = {Artificial Intelligence}, year = {1997}, volume = {97}, number = {1--2}, pages = {273--324}, month = {December}, note = {}, key = {}, abstract = {In the feature subset selection problem, a learning algorithm is faced with the problem of selecting a relevant subset of features upon which to focus its attention, while ignoring the rest. To achieve the best possible performance with a particular learning algorithm on a particular training set, a feature subset selection method should consider how the algorithm and the training set interact. We explore the relation between optimal feature subset selection and relevance. Our wrapper method searches for an optimal feature subset tailored to a particular algorithm and a domain. We study the strengths and weaknesses of the wrapper approach and show a series of improved designs. 
We compare the wrapper approach to induction without feature subset selection and to Relief, a filter approach to feature subset selection. Significant improvement in accuracy is achieved for some datasets for the two families of induction algorithms used: decision trees and Naive-Bayes.} } \citeasnoun{KohaviJohn97} introduced wrappers for feature subset selection. Their approach searches for an optimal feature subset tailored to a particular learning algorithm and a particular training set.
• GUYON, I. and A. ELISSEEFF, 2003. An introduction to variable and feature selection, Journal of Machine Learning Research, Volume 3, March, Pages 1157-1182. [Cited by 266] (81.46/year)
• @article{GuyonElisseeff03, author = {Isabelle Guyon and Andr{\'e} Elisseeff}, title = {An Introduction to Variable and Feature Selection}, journal = {Journal of Machine Learning Research}, year = {2003}, volume = {3}, number = {}, pages = {1157--1182}, month = {March}, note = {}, key = {}, abstract = {Variable and feature selection have become the focus of much research in areas of application for which datasets with tens or hundreds of thousands of variables are available. These areas include text processing of internet documents, gene expression array analysis, and combinatorial chemistry. The objective of variable selection is three-fold: improving the prediction performance of the predictors, providing faster and more cost-effective predictors, and providing a better understanding of the underlying process that generated the data. The contributions of this special issue cover a wide range of aspects of such problems: providing a better definition of the objective function, feature construction, feature ranking, multivariate feature selection, efficient search methods, and feature validity assessment methods.} } \citeasnoun{GuyonElisseeff03} gave an introduction to variable and feature selection. They recommend using a linear predictor of your choice (e.g. a linear SVM) and selecting variables in two alternative ways: (1) with a variable ranking method using a correlation coefficient or mutual information; (2) with a nested subset selection method performing forward or backward selection or with multiplicative updates.
• JOHN, George H., R. KOHAVI and Karl PFLEGER, 1994. Irrelevant features and the subset selection problem, Machine Learning: Proceedings of the Eleventh International Conference, edited by William W. Cohen and Haym Hirsh, pages 121-129. [Cited by 669] (54.54/year)
• "We address the problem of finding a subset of features that allows a supervised induction algorithm to induce small high-accuracy concepts. We examine notions of relevance and irrelevance, and show that the definitions used in the machine learning literature do not adequately partition the features into useful categories of relevance. We present definitions for irrelevance and for two degrees of relevance. These definitions improve our understanding of the behavior of previous subset selection algorithms, and help define the subset of features that should be sought. The features selected should depend not only on the features and the target concept, but also on the induction algorithm. We describe a method for feature subset selection using cross-validation that is applicable to any induction algorithm, and discuss experiments conducted with ID3 and C4.5 on artificial and real datasets." @inproceedings{JohnKohaviPfleger94, author = {George H. John and Ron Kohavi and Karl Pfleger}, title = {Irrelevant Features and the Subset Selection Problem}, booktitle = {Machine Learning: Proceedings of the Eleventh International Conference}, year = {1994}, editor = {William W. Cohen and Haym Hirsh}, pages = {121--129}, organization = {}, publisher = {Morgan Kaufmann Publishers}, address = {San Francisco, CA}, month = {}, note = {}, key = {}, abstract = {We address the problem of finding a subset of features that allows a supervised induction algorithm to induce small high-accuracy concepts. We examine notions of relevance and irrelevance, and show that the definitions used in the machine learning literature do not adequately partition the features into useful categories of relevance. We present definitions for irrelevance and for two degrees of relevance. These definitions improve our understanding of the behavior of previous subset selection algorithms, and help define the subset of features that should be sought. 
The features selected should depend not only on the features and the target concept, but also on the induction algorithm. We describe a method for feature subset selection using cross-validation that is applicable to any induction algorithm, and discuss experiments conducted with ID3 and C4.5 on artificial and real datasets.} } \citeasnoun{JohnKohaviPfleger94} addressed the problem of irrelevant features and the subset selection problem. They presented definitions for irrelevance and for two degrees of relevance (weak and strong). They also stated that the features selected should depend not only on the features and the target concept, but also on the induction algorithm. Further, they claimed that the filter model approach to subset selection should be replaced with the wrapper model.
• BLUM, Avrim L. and Pat LANGLEY, 1997. Selection of Relevant Features and Examples in Machine Learning, Artificial Intelligence, Volume 97, Issues 1-2, December 1997, Pages 245-271. [Cited by 455] (49.11/year)
• @article{BlumLangley97, author = {Avrim L. Blum and Pat Langley}, title = {Selection of Relevant Features and Examples in Machine Learning}, journal = {Artificial Intelligence}, year = {1997}, volume = {97}, number = {1--2}, pages = {245--271}, month = {December}, note = {}, key = {}, abstract = {In this survey, we review work in machine learning on methods for handling data sets containing large amounts of irrelevant information. We focus on two key issues: the problem of selecting relevant features, and the problem of selecting relevant examples. We describe the advances that have been made on these topics in both empirical and theoretical work in machine learning, and we present a general framework that we use to compare different methods. We close with some challenges for future work in this area.} } \citeasnoun{BlumLangley97} focussed on two key issues: the problem of selecting relevant features and the problem of selecting relevant examples.
• JAIN, Anil and Douglas ZONGKER, 1997. Feature Selection: Evaluation, Application, and Small Sample Performance, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 2, pp. 153-158. [Cited by 418] (45.11/year)
• @article{JainZongker97, author = {Anil Jain and Douglas Zongker}, title = {Feature Selection: Evaluation, Application, and Small Sample Performance}, journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence}, year = {1997}, volume = {19}, number = {2}, pages = {153--158}, month = {February}, note = {}, key = {}, abstract = {A large number of algorithms have been proposed for feature subset selection. Our experimental results show that the sequential forward floating selection algorithm, proposed by Pudil et al. (1994), dominates the other algorithms tested. We study the problem of choosing an optimal feature set for land use classification based on SAR satellite images using four different texture models. Pooling features derived from different texture models, followed by a feature selection results in a substantial improvement in the classification accuracy. We also illustrate the dangers of using feature selection in small sample size situations.} } \citeasnoun{JainZongker97} considered various feature subset selection algorithms and found that the sequential forward floating selection algorithm, proposed by \citeasnoun{PudilNovovicovaKittler94}, dominated the other algorithms tested.
• LIU, Huan and Hiroshi MOTODA, 1998. Feature Selection for Knowledge Discovery and Data Mining. Kluwer Academic Publishers Norwell, MA, USA. [Cited by 329] (39.81/year)
• @book{LiuMotoda98, author = {Huan Liu and Hiroshi Motoda}, title = {Feature Selection for Knowledge Discovery and Data Mining}, chapter = {}, pages = {}, publisher = {Kluwer Academic Publishers}, year = {1998}, volume = {}, series = {}, address = {}, edition = {}, month = {}, note = {}, key = {} } \citeasnoun{LiuMotoda98} wrote their book on feature selection which offers an overview of the methods developed since the 1970s and provides a general framework in order to examine these methods and categorize them.
• KOLLER, D. and M. SAHAMI, 1996. Toward optimal feature selection, Proceedings of the Thirteenth International Conference on Machine Learning, Pages 284-292. [Cited by 363] (35.36/year)
• @inproceedings{KollerSahami96, author = {Daphne Koller and Mehran Sahami}, title = {Toward Optimal Feature Selection}, booktitle = {Proceedings of the Thirteenth International Conference on Machine Learning}, year = {1996}, editor = {}, pages = {284--292}, organization = {}, publisher = {Morgan Kaufmann}, address = {}, month = {July}, note = {}, key = {}, abstract = {In this paper, we examine a method for feature subset selection based on Information Theory. Initially, a framework for defining the theoretically optimal, but computationally intractable, method for feature subset selection is presented. We show that our goal should be to eliminate a feature if it gives us little or no additional information beyond that subsumed by the remaining features. In particular, this will be the case for both irrelevant and redundant features. We then give an efficient algorithm for feature selection which computes an approximation to the optimal feature selection criterion. The conditions under which the approximate algorithm is successful are examined. Empirical results are given on a number of data sets, showing that the algorithm effectively handles datasets with a very large number of features.} } \citeasnoun{KollerSahami96} examined a method for feature subset selection based on Information Theory: they presented a theoretically justified model for optimal feature selection based on using cross-entropy to minimize the amount of predictive information lost during feature elimination.
• WESTON, Jason, et al., 2001. Feature selection for SVMs, Advances in Neural Information Processing Systems 13 [Cited by 203] (32.40/year)
• @inproceedings{Weston-etal00, author = {Jason Weston and Sayan Mukherjee and Olivier Chapelle and Massimiliano Pontil and Tomaso Poggio and Vladimir Vapnik}, title = {Feature Selection for {SVMs}}, booktitle = {Advances in Neural Information Processing Systems 13}, year = {2001}, editor = {Todd K. Leen and Thomas G. Dietterich and Volker Tresp}, pages = {668--674}, organization = {}, publisher = {The MIT Press}, address = {Cambridge, MA}, month = {April}, note = {}, key = {}, abstract = {We introduce a method of feature selection for Support Vector Machines. The method is based upon finding those features which minimize bounds on the leave-one-out error. This search can be efficiently performed via gradient descent. The resulting algorithms are shown to be superior to some standard feature selection algorithms on both toy data and real-life problems of face recognition, pedestrian detection and analyzing DNA microarray data.}, conclusion = {In this article we have introduced a method to perform feature selection for SVMs. This method is computationally feasible for high dimensional datasets compared to existing wrapper methods, and experiments on a variety of toy and real datasets show superior performance to the filter methods tried. This method, amongst other applications, speeds up SVMs for time critical applications (e.g. pedestrian detection), and makes possible feature discovery (e.g. gene discovery). Secondly, in simple experiments we showed that SVMs can indeed suffer in high dimensional spaces where many features are irrelevant. Our method provides one way to circumvent this naturally occurring, complex problem.} } \citeasnoun{Weston-etal00} introduced a method of feature selection for SVMs which is based upon finding those features which minimize bounds on the leave-one-out error. The method was shown to be superior to some standard feature selection algorithms on the data sets tested.
• DASH, M. and H. LIU, 1997. Feature selection for classification, Intelligent Data Analysis, Volume 1, Issues 1-4, 1997, Pages 131-156. [Cited by 287] (30.98/year)
• @article{DashLiu97, author = {M. Dash and H. Liu}, title = {Feature Selection for Classification}, journal = {Intelligent Data Analysis}, year = {1997}, volume = {1}, number = {1-4}, pages = {131--156}, month = {}, note = {}, key = {}, abstract = {Feature selection has been the focus of interest for quite some time and much work has been done. With the creation of huge databases and the consequent requirements for good machine learning techniques, new problems arise and novel approaches to feature selection are in demand. This survey is a comprehensive overview of many existing methods from the 1970's to the present. It identifies four steps of a typical feature selection method, and categorizes the different existing methods in terms of generation procedures and evaluation functions, and reveals hitherto unattempted combinations of generation procedures and evaluation functions. Representative methods are chosen from each category for detailed explanation and discussion via example. Benchmark datasets with different characteristics are used for comparative study. The strengths and weaknesses of different methods are explained. Guidelines for applying feature selection methods are given based on data types and domain characteristics. This survey identifies the future research areas in feature selection, introduces newcomers to this field, and paves the way for practitioners who search for suitable methods for solving domain-specific real-world applications.} } \citeasnoun{DashLiu97} gave a survey of feature selection methods for classification.
• PUDIL, P., J. NOVOVIČOVÁ and J. KITTLER, 1994. Floating search methods in feature selection, Pattern Recognition Letters, Volume 15, Issue 11, November 1994, Pages 1119-1125. [Cited by 377] (30.74/year)
• @article{PudilNovovicovaKittler94, author = {P. Pudil and J. Novovi{\v{c}}ov{\'a} and J. Kittler}, title = {Floating Search Methods in Feature Selection}, journal = {Pattern Recognition Letters}, year = {1994}, volume = {15}, number = {11}, pages = {1119--1125}, month = {November}, note = {}, key = {}, abstract = {Sequential search methods characterized by a dynamically changing number of features included or eliminated at each step, henceforth "floating" methods, are presented. They are shown to give very good results and to be computationally more effective than the branch and bound method.} } \citeasnoun{PudilNovovicovaKittler94} presented "floating" search methods in feature selection. These are sequential search methods characterized by a dynamically changing number of features included or eliminated at each step. They were shown to give very good results and to be computationally more effective than the branch and bound method.
• YANG, Jihoon and Vasant HONAVAR, 1998. Feature subset selection using a genetic algorithm, IEEE Intelligent Systems, Volume 13, Issue 2, Pages 44-49. [Cited by 248] (30.01/year)
• @article{YangHonavar98, author = {Jihoon Yang and Vasant Honavar}, title = {Feature Subset Selection Using a Genetic Algorithm}, journal = {IEEE Intelligent Systems}, year = {1998}, volume = {13}, number = {2}, pages = {44--49}, month = {March/April}, note = {}, key = {}, abstract = {Practical pattern-classification and knowledge-discovery problems require the selection of a subset of attributes or features to represent the patterns to be classified. The authors' approach uses a genetic algorithm to select such subsets, achieving multicriteria optimization in terms of generalization accuracy and costs associated with the features.} } \citeasnoun{YangHonavar98} used a genetic algorithm for feature subset selection.
• FORMAN, George, 2003. An extensive empirical study of feature selection metrics for text classification, Journal of Machine Learning Research, Volume 3, March, Pages 1289-1305. [Cited by 75] (22.97/year)
• @article{Forman03, author = {George Forman}, title = {An Extensive Empirical Study of Feature Selection Metrics for Text Classification}, journal = {Journal of Machine Learning Research}, year = {2003}, volume = {3}, number = {}, pages = {1289--1305}, month = {March}, note = {}, key = {}, abstract = {Machine learning for text classification is the cornerstone of document categorization, news filtering, document routing, and personalization. In text domains, effective feature selection is essential to make the learning task efficient and more accurate. This paper presents an empirical comparison of twelve feature selection methods (e.g. Information Gain) evaluated on a benchmark of 229 text classification problem instances that were gathered from Reuters, TREC, OHSUMED, etc. The results are analyzed from multiple goal perspectives---accuracy, F-measure, precision, and recall-since each is appropriate in different situations.\\ The results reveal that a new feature selection metric we call Bi-Normal Separation' (BNS), outperformed the others by a substantial margin in most situations. This margin widened in tasks with high class skew, which is rampant in text classification problems and is particularly challenging for induction algorithms.\\ A new evaluation methodology is offered that focuses on the needs of the data mining practitioner faced with a single dataset who seeks to choose one (or a pair of) metrics that are most \textit{likely} to yield the best performance. From this perspective, BNS was the top single choice for all goals except precision, for which Information Gain yielded the best result most often. This analysis also revealed, for example, that Information Gain and Chi-Squared have correlated failures, and so they work poorly together. 
When choosing optimal pairs of metrics for each of the four performance goals, BNS is consistently a member of the pair---e.g., for greatest recall, the pair BNS + F1-measure yielded the best performance on the greatest number of tasks by a considerable margin.} } \citeasnoun{Forman03} presented an empirical comparison of twelve feature selection methods. Results revealed the surprising performance of a new feature selection metric, 'Bi-Normal Separation' (BNS).
• XING, E.P., M.I. JORDAN and R.M. KARP, 2001. Feature selection for high-dimensional genomic microarray data, ICML '01: Proceedings of the Eighteenth International Conference on Machine Learning, pages 601-608. [Cited by 118] (22.41/year)
• @inproceedings{XingJordanKarp01, author = {Eric P. Xing and Michael I. Jordan and Richard M. Karp}, title = {Feature Selection for High-Dimensional Genomic Microarray Data}, booktitle = {ICML '01: Proceedings of the Eighteenth International Conference on Machine Learning}, year = {2001}, editor = {}, pages = {601--608}, organization = {}, publisher = {Morgan Kaufmann}, address = {San Francisco, CA, USA}, month = {}, note = {}, key = {}, abstract = {We report on the successful application of feature selection methods to a classification problem in molecular biology involving only 72 data points in a 7130 dimensional space. Our approach is a hybrid of filter and wrapper approaches to feature selection. We make use of a sequence of simple filters, culminating in Koller and Sahami's (1996) Markov Blanket filter, to decide on particular feature subsets for each subset cardinality. We compare between the resulting subset cardinalities using cross validation. The paper also investigates regularization methods as an alternative to feature selection, showing that feature selection methods are preferable in this problem.} } \citeasnoun{XingJordanKarp01} successfully applied feature selection methods (using a hybrid of filter and wrapper approaches) to a classification problem in molecular biology involving only 72 data points in a 7130 dimensional space. They also investigated regularization methods as an alternative to feature selection, and showed that feature selection methods were preferable in the problem they tackled.
• KIRA, Kenji and L.A. RENDELL, 1992. A practical approach to feature selection. Proceedings of the ninth international workshop on Machine …. [Cited by 294] (20.61/year)
• @inproceedings{KiraRendell92, author = {Kenji Kira and Larry A. Rendell}, title = {A Practical Approach to Feature Selection}, booktitle = {ML92: Proceedings of the Ninth International Conference on Machine Learning}, year = {1992}, editor = {Derek H. Sleeman and Peter Edwards}, pages = {249--256}, organization = {}, location = {Aberdeen, Scotland, United Kingdom}, publisher = {Morgan Kaufmann Publishers Inc.}, address = {San Francisco, CA, USA}, month = {}, note = {}, key = {}, abstract = {} } \citeasnoun{KiraRendell92} described a statistical feature selection algorithm called RELIEF that uses instance based learning to assign a relevance weight to each feature.
• KIRA, K. and L.A. RENDELL, 1992. The feature selection problem: Traditional methods and a new algorithm. AAAI-92: Proceedings of the 10th National Conference on Artificial Intelligence [Cited by 279] (19.54/year)
• @inproceedings{KiraRendell, author = {K. Kira and L. A. Rendell}, title = {The feature selection problem: Traditional methods and a new algorithm}, booktitle = {AAAI-92: Proceedings of the 10th National Conference on Artificial Intelligence}, year = {1992}, editor = {W. Swartout}, pages = {129--134}, organization = {}, publisher = {AAAI Press/The MIT Press}, address = {}, month = {August}, note = {}, key = {}, abstract = {} }
• BATTITI, Roberto, 1994. Using mutual information for selecting features in supervised neural net learning. Neural Networks, IEEE Transactions on. [Cited by 224] (18.26/year)
• NG, Hwee Tou, Wei Boon GOH and Kok Leong LOW, 1997. Feature selection, perceptron learning, and a usability case study for text categorization. Proceedings of the 20th annual international ACM SIGIR …. [Cited by 165] (17.81/year)
• BRADLEY, P.S. and O.L. MANGASARIAN, 1998. Feature selection via concave minimization and support vector machines. Machine Learning Proceedings of the Fifteenth International …. [Cited by 144] (17.42/year)
• MITRA, P., C.A. MURTHY and S.K. PAL, 2002. Unsupervised feature selection using feature similarity. Pattern Analysis and Machine Intelligence, IEEE Transactions …. [Cited by 71] (16.64/year)
• SKALAK, David B., 1994. Prototype and feature selection by sampling and random mutation hill climbing algorithms. Proceedings of the Eleventh International Conference on …. [Cited by 194] (15.82/year)
• KWAK, N. and C.H. CHOI, 2002. Input feature selection for classification problems. Neural Networks, IEEE Transactions on. [Cited by 66] (15.47/year)
• ALMUALLIM, Hussein and Thomas G. DIETTERICH, 1991. Learning with many irrelevant features. Proceedings of the Ninth National Conference on Artificial …. [Cited by 236] (15.46/year)
• BØ, T.H. and I. JONASSEN, 2002. New feature subset selection procedures for classification of expression profiles. Genome Biology. [Cited by 65] (15.24/year)
• HALL, M.A., 1999. Correlation-based Feature Selection for Machine Learning. [Cited by 101] (13.90/year)
• HALL, M.A., 2000. Correlation-based feature selection for discrete and numeric class machine learning. Proceedings of the Seventeenth International Conference on …. [Cited by 87] (13.89/year)
• SWINIARSKI, R.W. and A. SKOWRON, 2003. Rough set methods in feature selection and recognition. Pattern Recognition Letters. [Cited by 44] (13.47/year)
• MLADENIC, D. and M. GROBELNIK, 1999. Feature selection for unbalanced class distribution and naive Bayes. Machine Learning: Proceedings of the Sixteenth International …. [Cited by 93] (12.80/year)
• MINGERS, J., 1989. An empirical comparison of selection measures for decision-tree induction. Machine Learning. [Cited by 219] (12.68/year)
• SIEDLECKI, W. and J. SKLANSKY, 1993. On automatic feature selection. Handbook of pattern recognition & computer vision table of …. [Cited by 166] (12.51/year)
• MLADENIC, D., 1998. Feature subset selection in text-learning. Proceedings of the 10th European Conference on Machine …. [Cited by 103] (12.46/year)
• YU, L. and H. LIU, 2003. Feature Selection for High-Dimensional Data: A Fast Correlation-Based Filter Solution. Proceedings of the twentieth International Conference on …. [Cited by 39] (11.94/year)
• LANGLEY, Pat, 1994. Selection of relevant features in machine learning. Proceedings of the AAAI Fall Symposium on Relevance. [Cited by 142] (11.58/year)
• MINGERS, J., 1989. An Empirical Comparison of Pruning Methods for Decision Tree Induction. Machine Learning. [Cited by 200] (11.58/year)
• SIEDLECKI, W. and J. SKLANSKY, 1989. A note on genetic algorithms for large-scale feature selection. Pattern Recognition Letters. [Cited by 200] (11.58/year)
• BRADLEY, P.S., O.L. MANGASARIAN and W.N. STREET, 1998. Feature selection via mathematical programming. INFORMS Journal on Computing. [Cited by 92] (11.13/year)
• LIU, H. and R. SETIONO, 1996. A probabilistic approach to feature selection - a filter solution. Proceedings of the 13th ICML. [Cited by 110] (10.72/year)
• KIM, K. and W.B. LEE, 2004. Stock market prediction using artificial neural networks with optimal feature transformation. Neural Computing & Applications. [Cited by 1] (0.44/year)