
Rudolf Mayer gives a talk on Adversarial Machine Learning at the Vienna Deep Learning Meetup

As Machine Learning is increasingly integrated into many applications, including safety-critical ones such as autonomous cars, robotics, visual authentication and voice control, wrong predictions can have a significant impact on individuals and groups. Advances in prediction accuracy have been impressive, and while machine learning systems can still make rather unexpected mistakes on relatively easy examples, the robustness of algorithms has also steadily increased.

However, many models, and specifically Deep Learning approaches for image analysis, are rather susceptible to adversarial attacks. These attacks come, for example, in the form of small perturbations that remain (almost) imperceptible to human vision but can cause a neural network classifier to completely change its prediction for an image, with the model reporting very high confidence in the wrong prediction. A particularly strong form of attack is the so-called backdoor, where a specific key embedded into a data sample triggers a pre-defined class prediction in a controlled manner.
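As an illustration (not taken from the talk itself), the sketch below shows one common way such small perturbations can be crafted: the Fast Gradient Sign Method (FGSM), which nudges each pixel slightly in the direction that increases the classifier's loss. The model, input and label here are hypothetical stand-ins; any differentiable PyTorch classifier could be substituted.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return an adversarially perturbed copy of x (pixel values kept in [0, 1])."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon per pixel.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Illustrative usage with a tiny stand-in classifier on 32x32 RGB inputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)      # a single "image"
label = torch.tensor([3])         # its (assumed) true class
x_adv = fgsm_perturb(model, x, label)
print(model(x).argmax(1), model(x_adv).argmax(1))  # predictions may now differ
```

Even with a very small epsilon, the perturbed image typically looks unchanged to a human while the classifier's prediction can flip, which is exactly the effect described above.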

This talk will give an overview of various attacks (backdoors, evasion, inversion) and discuss how they can be mitigated.

Go to the Deep Learning Meetup website for the full schedule.