Abstract
Dimensionality reduction is an important problem in the efficient handling of large databases. Many feature selection methods exist for supervised data, where class information is available; little work has been done for unsupervised data, where it is not. Principal Component Analysis (PCA) is often used, but PCA creates new features, and it is difficult to gain an intuitive understanding of the data from those new features alone. In this paper we address the problem of identifying and selecting the important original features of unsupervised data. Our method is based on the observation that removing an irrelevant feature from the feature set may not change the underlying concept of the data, whereas removing a relevant one does. We propose an entropy measure for ranking features and conduct extensive experiments showing that our method finds the important features. It also compares well with Relief, a similar feature-ranking method that, unlike ours, requires class information.
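The entropy-based ranking the abstract describes can be sketched roughly as follows. This is an illustrative reconstruction under stated assumptions, not the paper's exact formulation: it assumes pairwise similarities of the form `S = exp(-alpha * d)` for instance distances `d`, an entropy of the form `-(S log S + (1-S) log(1-S))` summed over instance pairs, and ranks a feature as more important the more the entropy rises when that feature is removed. The function names `entropy_of_data` and `rank_features`, and the heuristic choice of `alpha`, are hypothetical.

```python
import numpy as np

def entropy_of_data(X):
    """Entropy of a dataset based on pairwise instance similarities (illustrative)."""
    # Pairwise Euclidean distances between instances.
    diff = X[:, None, :] - X[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    # Map distances to similarities in (0, 1); alpha is chosen so the mean
    # nonzero distance maps to similarity 0.5 (a common heuristic, assumed here).
    alpha = -np.log(0.5) / d[d > 0].mean()
    S = np.exp(-alpha * d)
    # Clip to avoid log(0) for identical instances (S = 1 on the diagonal).
    S = np.clip(S, 1e-12, 1 - 1e-12)
    # Well-separated clusters push S toward 0 or 1, making this entropy small;
    # structureless data keeps S mid-range, making it large.
    return -(S * np.log(S) + (1 - S) * np.log(1 - S)).sum()

def rank_features(X):
    """Rank original features by the entropy of the data with each one removed."""
    scores = [entropy_of_data(np.delete(X, f, axis=1)) for f in range(X.shape[1])]
    # Convention assumed here: removing a relevant feature destroys cluster
    # structure and raises entropy, so higher score = more important feature.
    order = np.argsort(scores)[::-1]
    return order, scores
```

As a quick check of the intuition, on a toy dataset whose first column separates two clusters and whose second column is noise, the first feature should rank highest, since deleting it leaves only structureless data.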
Original language | English (US) |
---|---|
Title of host publication | Proceedings of the International Conference on Tools with Artificial Intelligence |
Editors | Anon |
Publisher | IEEE |
Pages | 532-539 |
Number of pages | 8 |
State | Published - 1997 |
Externally published | Yes |
Event | Proceedings of the 1997 9th IEEE International Conference on Tools with Artificial Intelligence - Newport Beach, CA, USA Duration: Nov 3 1997 → Nov 8 1997 |
Other
Other | Proceedings of the 1997 9th IEEE International Conference on Tools with Artificial Intelligence |
---|---|
City | Newport Beach, CA, USA |
Period | 11/3/97 → 11/8/97 |
ASJC Scopus subject areas
- Software