Rudolf Mayer @Vienna Deep Learning MeetUp

On the 4th of May, Rudolf Mayer will give a talk on the Security of Machine Learning / Artificial Intelligence at the Vienna Deep Learning Meetup.

With machine learning increasingly being deployed within (semi-)autonomous systems and thus permeating our daily lives, these systems likewise become interesting targets for cybercriminals, who try to cause malfunctions and/or make money with their attacks.
While modern ML systems come close to or sometimes even surpass human capabilities on several tasks, they still make surprising mistakes, e.g., when focusing on small details in the input that are not directly relevant to the task. For example, an image classification system might learn to label images with polar bears as such because these images almost always also contain snow and ice – but not because of the polar bear itself! These intriguing properties of primarily deep learning systems can also be exploited on purpose: systems can be tricked into predicting the wrong outcome for specific inputs (e.g., tricking an authentication system into believing you are someone else who is allowed to access a resource), or made to malfunction in general (e.g., the authentication system no longer recognises anyone, and nobody can access the resource). Other attacks aim at stealing the machine learning model itself, so that the attackers can monetise it themselves.
In this talk, we discuss the most prominent attack vectors on ML systems, how realistic they already are, and how you can make your ML training and prediction systems more secure in order to detect and defend against such attacks.
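As an illustration of the evasion attacks mentioned above (the talk itself is not tied to any particular method), the following is a minimal sketch of the well-known Fast Gradient Sign Method (FGSM) in PyTorch; the function name and the epsilon value are illustrative assumptions, not part of the talk:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Illustrative sketch of an evasion attack (FGSM): add a small,
    gradient-guided perturbation so a specific input gets misclassified."""
    image = image.clone().detach().requires_grad_(True)
    # Loss of the model's prediction w.r.t. the true label.
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that maximally increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    # Keep pixel values in a valid range.
    return adversarial.clamp(0, 1).detach()
```

The perturbation is typically imperceptible to a human observer, yet changes the model's prediction for that specific input – one of the attack vectors discussed in the talk.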

Links

Machine Learning and Data Management – SBA Research (sba-research.org)

Vienna Deep Learning Meetup (Vienna, Austria) | Meetup