Abstract
Selecting the most informative features, those that lead to a small loss on future data, is arguably one of the most important steps in classification, data analysis, and model selection. Several feature selection (FS) algorithms are available; however, due to the noise present in any data set, FS algorithms are typically accompanied by an appropriate cross-validation scheme. In this brief, we propose a statistical hypothesis test derived from the Neyman-Pearson lemma for determining whether a feature is statistically relevant. The proposed approach can be applied as a wrapper to any FS algorithm, regardless of the FS criteria used by that algorithm, to determine whether a feature belongs in the relevant set. Perhaps more importantly, this procedure efficiently determines the number of relevant features given an initial starting point. We provide freely available software implementations of the proposed methodology.
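The abstract does not reproduce the brief's exact test statistic, so the following is only an illustrative sketch of the general idea: under a Neyman-Pearson framing, one compares cross-validated losses with and without a candidate feature and rejects the null hypothesis that the feature is irrelevant. The simplification below (counting folds where adding the feature reduced loss, then applying an exact binomial tail test, where the Neyman-Pearson lemma implies the most powerful test rejects for large counts) is an assumption of this sketch, not the paper's method; the function name `np_relevance_test` and the fold-counting setup are hypothetical.

```python
import math

def np_relevance_test(wins: int, trials: int, alpha: float = 0.05) -> bool:
    """Illustrative Neyman-Pearson-style relevance test (NOT the paper's exact test).

    H0: the feature is irrelevant, so adding it helps in a CV fold with
    probability 0.5; H1: it helps with probability > 0.5. The likelihood
    ratio is monotone in 'wins', so by the Neyman-Pearson lemma the most
    powerful level-alpha test rejects when 'wins' is large. We calibrate
    the cutoff with the exact binomial upper tail under H0.

    wins   -- number of CV folds where including the feature reduced loss
    trials -- total number of CV folds compared
    Returns True if H0 is rejected, i.e., the feature is deemed relevant.
    """
    # P(X >= wins) for X ~ Binomial(trials, 0.5), computed exactly
    tail = sum(math.comb(trials, k) for k in range(wins, trials + 1)) / 2 ** trials
    return tail <= alpha

# Example: the feature helped in 18 of 20 folds -> relevant;
# helping in only 11 of 20 folds is consistent with chance.
print(np_relevance_test(18, 20))  # True
print(np_relevance_test(11, 20))  # False
```

In a wrapper setting, such a test would be run once per candidate feature produced by the underlying FS algorithm, keeping only features for which the null is rejected.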
| Original language | English (US) |
|---|---|
| Article number | 6823119 |
| Pages (from-to) | 880-886 |
| Number of pages | 7 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 26 |
| Issue number | 4 |
| DOIs | |
| State | Published - Apr 1 2015 |
Keywords
- Feature selection (FS)
- Neyman-Pearson
ASJC Scopus subject areas
- Software
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence