Interview: Oct. 5, 2022. A short interview on AI security for Cybersecurity Magazine, recorded during the ETSI Cybersecurity Conference 2022.

Talk: Jul. 19, 2022. Received the 2022 ICML Test of Time Award for our paper “Poisoning Attacks against Support Vector Machines” (ICML 2012). The award talk, titled “Poisoning Attacks against SVMs: Ten Years After”, is available here.

This was the first paper to propose gradient-based attacks against machine learning, followed one year later by our ECML PKDD 2013 paper on evasion attacks. The same idea of attacking machine learning models via gradients was independently rediscovered one year later to demonstrate the existence of adversarial examples against deep neural networks.
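To make the idea concrete, here is a minimal sketch of a gradient-based evasion attack in the spirit of those papers. Everything in it is illustrative: the linear classifier, its weights, the starting point, and the step size are all hypothetical, and a real attack would target a trained model's loss rather than this toy decision function.

```python
import numpy as np

# Hypothetical trained linear classifier f(x) = w.x + b (illustrative values).
w = np.array([1.0, -2.0])
b = 0.5

def decision(x):
    """Signed score of the classifier; positive means class +1."""
    return w @ x + b

# Start from a sample the classifier scores as positive.
x = np.array([2.0, 0.5])
eps = 0.1  # attack step size (hypothetical)

# For a linear model the gradient of the score w.r.t. the input is just w;
# repeatedly stepping against the (normalized) gradient drives the score down
# until the sample crosses the decision boundary -- an evasion attack.
for _ in range(25):
    x = x - eps * w / np.linalg.norm(w)

print(decision(x) < 0)  # True: the perturbed sample is now misclassified
```

The same gradient-following principle, applied to the loss of a deep network instead of a linear score, is what later produced adversarial examples against neural networks.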


Talk: May 20, 2021. Lecture on “Trustworthy AI: Poisoning Attacks on AI” at the AI for Good Trustworthy AI Seminar Series.


Talk: June 10, 2020. Invited speaker at CASA Distinguished Lecture Series. The video of my lecture “Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning” is available below.


Talk: Oct. 25, 2019. Invited speaker at Avast’s conference on CyberSec & AI, held in Prague, Czech Republic. The video of my lecture “Machine Learning Security: Adversarial Attacks and Defenses” is available below.


Talk: Nov. 15-16, 2018. Invited speaker at the “Winter School on Quantitative Systems Biology: Learning and AI”, held in Trieste, Italy. The video of the first part of my lecture on Adversarial Machine Learning is available below (slides can be downloaded from the school’s website).


Talk: Sept. 14, 2018. Invited speaker at the IBM workshop Nemesis ‘18, co-located with ECML-PKDD 2018 in Dublin.


Tutorial: Our ICCV 2017 Tutorial on Adversarial Pattern Recognition and Machine Learning is available on YouTube. The associated review article, “Wild Patterns: Ten Years after the Rise of Adversarial Machine Learning”, is on arXiv. The tutorial webpage also contains slides from the follow-up editions at IJCAI-ECAI ‘18, EUSIPCO ‘18, ECCV ‘18, and ACM CCS ‘18.