News

Prof. Daniel S. Yeung visits Secure Business Austria

Prof. Daniel S. Yeung gave a talk on “Sensitivity Based Generalization Error for Supervised Learning Problem with Applications in Model Selection and Feature Selection”.

Abstract
A generalization error model provides theoretical support for a classifier’s performance in terms of prediction accuracy. However, existing models give very loose error bounds, which explains why classification systems generally rely on experimental validation for their claims of prediction accuracy. In this talk we will revisit this problem and explore the idea of developing a new generalization error model based on the assumption that only prediction accuracy on unseen points in a neighborhood of a training point is considered, since it is unreasonable to require a classifier to accurately predict unseen points “far away” from the training samples. The new error model makes use of a sensitivity measure for an ensemble of multilayer feedforward neural networks (Multilayer Perceptrons or Radial Basis Function Neural Networks). Two important applications will be demonstrated: model selection and feature reduction for RBFNN classifiers. A number of experimental results on datasets such as UCI benchmarks, the KDD Cup ’99 dataset, and text categorization corpora will be presented.
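
The key quantity in this line of work is a sensitivity measure: how much the network output changes when a training input is perturbed slightly, i.e. when it is moved to an unseen point within a small neighborhood. The following is a minimal, illustrative Python sketch of how such a term could be estimated by Monte-Carlo sampling. The function name, the uniform perturbation model, and the neighborhood half-width q are assumptions made here for illustration only; they are not the exact formulation presented in the talk.

```python
import numpy as np

def sensitivity_estimate(predict, X_train, q=0.1, n_draws=50, seed=0):
    """Monte-Carlo estimate of a sensitivity measure: the expected squared
    change in model output when each training point is perturbed uniformly
    within a hypercube of half-width q (a local "Q-neighborhood").
    NOTE: an illustrative sketch, not the formulation used in the talk."""
    rng = np.random.default_rng(seed)
    base = predict(X_train)                      # outputs at the training points
    diffs = []
    for _ in range(n_draws):
        delta = rng.uniform(-q, q, size=X_train.shape)
        diffs.append(np.mean((predict(X_train + delta) - base) ** 2))
    return float(np.mean(diffs))                 # smaller = smoother near the data


if __name__ == "__main__":
    # Toy RBF-style model: output is a weighted sum of Gaussian bumps.
    centers = np.array([[0.0, 0.0], [1.0, 1.0]])
    weights = np.array([1.0, -1.0])

    def rbf_predict(X, width=0.5):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width ** 2)) @ weights

    X = np.random.default_rng(1).uniform(-1.0, 2.0, size=(100, 2))
    # A wider kernel yields a smoother model and hence a lower sensitivity;
    # comparing such values across candidate models is the kind of criterion
    # the talk applies to model selection for RBFNN classifiers.
    print(sensitivity_estimate(lambda Z: rbf_predict(Z, width=0.5), X))
    print(sensitivity_estimate(lambda Z: rbf_predict(Z, width=1.5), X))
```

In the same spirit, feature reduction can be approached by asking how sensitive the output is to perturbations of individual input dimensions, though the precise procedure is the subject of the talk rather than this sketch.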