Title Record
- Title: Interpretation of Linear Classifiers by Means of Feature Relevance Bounds
- Author:
- Published in: Neurocomputing, vol. 298, pp. 69-79
- Published:
- Language: English
- Document type: Journal article
- Keywords:
- URN
- DOI
Access Restrictions
- The document is freely available
Abstract
Research on feature relevance and feature selection goes back several decades, but the importance of these areas continues to grow as ever more data become available and machine learning methods are used to gain insight and interpret data, rather than solely to solve classification or regression problems. Although feature relevance is often discussed, it is frequently poorly defined, and the feature selection problems studied are subtly different. Furthermore, the problem of finding all features relevant for a classification problem has only recently started to gain traction, despite its importance for interpretability and for integrating expert knowledge. In this paper, we attempt to unify commonly used concepts and to give an overview of the main questions and results. We formalize two interpretations of the all-relevant problem and propose a polynomial method to approximate one of them for the important hypothesis class of linear classifiers, which also enables a distinction between strongly and weakly relevant features.
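The distinction between strongly and weakly relevant features that the abstract mentions can be illustrated with a toy experiment. The sketch below is not the paper's polynomial relevance-bound method; it is a hedged numpy illustration of the underlying idea: dropping a weakly relevant feature (one with a substitute in the data) leaves a linear classifier's accuracy essentially unchanged, while dropping a strongly relevant feature (no substitute) degrades it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
s = rng.normal(size=n)      # strongly relevant: no substitute in the data
t = rng.normal(size=n)      # weakly relevant: t2 below is an exact copy
t2 = t.copy()               # weakly relevant duplicate of t
noise = rng.normal(size=n)  # irrelevant
y = np.sign(s + t)          # labels in {-1, +1}, determined by s and t
X = np.column_stack([s, t, t2, noise])

def accuracy_without(drop):
    """Fit a least-squares linear classifier with one feature removed
    and return its training accuracy."""
    cols = [i for i in range(X.shape[1]) if i != drop]
    w, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    return np.mean(np.sign(X[:, cols] @ w) == y)

# Dropping the weakly relevant t (index 1) or the irrelevant noise
# (index 3) barely hurts; dropping the strongly relevant s (index 0)
# costs a substantial amount of accuracy, since nothing substitutes for it.
for name, idx in [("s", 0), ("t", 1), ("t2", 2), ("noise", 3)]:
    print(f"accuracy without {name}: {accuracy_without(idx):.3f}")
```

Note that a purely marginal relevance score would not separate these cases: t and t2 look just as informative as s in isolation, and only the existence of a substitute reveals that each is merely weakly relevant.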