Let's fool neural networks
Neural networks are widely used machine learning models that have achieved state-of-the-art performance on a variety of tasks, e.g., image and speech recognition. Although they appear to be a powerful tool, neural networks are extremely susceptible to a specific kind of attack. In my presentation, I will introduce the audience to so-called adversarial examples, which cause neural networks to misclassify their inputs, show how to create such examples, and finally demonstrate how to defend our models against these kinds of attacks.
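As a small taste of the topic, here is a minimal sketch of the fast gradient sign method (FGSM), one common way to craft adversarial examples, applied to a toy logistic-regression classifier. All weights, inputs, and the perturbation budget below are made-up illustrative values, not material from the talk:

```python
import numpy as np

# Toy "network": a fixed logistic-regression classifier.
# Weights are illustrative; no training involved.
w = np.array([2.0, -3.0, 1.5])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return int(sigmoid(w @ x) > 0.5)

x = np.array([0.5, -0.2, 0.3])  # clean input, classified as class 1
y = 1                           # true label

# FGSM: the gradient of the cross-entropy loss w.r.t. the INPUT
# is (p - y) * w; the adversarial example takes a small step in
# the direction of the gradient's sign.
p = sigmoid(w @ x)
grad = (p - y) * w
eps = 0.4                       # perturbation budget (illustrative)
x_adv = x + eps * np.sign(grad)

print(predict(x), predict(x_adv))  # prediction flips: 1 0
```

The same sign-of-the-gradient step applies to deep networks; the gradient is then obtained by backpropagating the loss to the input rather than from a closed-form expression.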
Piotr has over 4 years of commercial experience writing Python applications. He has been a software developer and researcher at Codete since 2017. Piotr is a PhD student at AGH University of Science and Technology. His main field of interest is neural networks and their practical applications.