BSides Austin 2019
Thursday, March 28 • 5:00pm - 6:00pm
Fooling Machine Learning using Adversarial Examples


Adversarial example images appear to a human to be of one class (e.g., a dog or a car) but are classified by machine-learning image-recognition systems as a class of the attacker's choosing. This talk gives a conceptual introduction to image recognition with convolutional neural networks, shows how adversarial examples are created, and describes how the speaker adapted such an attack into a problem for picoCTF 2018, an introductory-level capture-the-flag competition. The talk concludes with an overview of the current state of adversarial example generation in academia, including the current capabilities of defenses and how attacks have been adapted to the real world. No prior conceptual knowledge of neural networks is required.
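The core idea behind crafting adversarial examples can be sketched with the fast gradient sign method (FGSM): nudge the input in the direction that most increases the loss for its true class, with each pixel moved by at most a small epsilon. The toy linear classifier, weights, and epsilon below are illustrative assumptions for the sketch, not material from the talk:

```python
import numpy as np

# Toy linear "classifier" (illustrative assumption): score > 0 -> class 1
# ("dog"), otherwise class 0 ("cat").
w = np.array([1.0, -2.0, 0.5])   # fixed toy weights
x = np.array([0.3, -0.4, 0.2])   # input the model classifies as class 1

def predict(v):
    return 1 if w @ v > 0 else 0

# For a linear model, the gradient of the score with respect to the input
# is simply w. FGSM-style step: move each input component by eps in the
# sign of the gradient, here *against* the class-1 score to flip the label
# while keeping the perturbation small and imperceptible in spirit.
eps = 0.5
x_adv = x - eps * np.sign(w)

print(predict(x))      # original prediction: 1
print(predict(x_adv))  # adversarial prediction: 0
```

With a real convolutional network the only substantive change is that the input gradient comes from backpropagation rather than being the weight vector itself; the sign-and-step perturbation is the same.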

Speakers

William Parks

Bill is an avid CTFer, having contributed to picoCTF 2017/2018 and plaidCTF 2017. He currently plays CTFs with Shell Collecting Club and dabbles in ML in his free time.



Thursday March 28, 2019 5:00pm - 6:00pm
Stadium
