Financial institutions, home automation products, and hi-tech offices have increasingly used voice fingerprinting as a method for authentication. Recent advances in machine learning have shown that text-to-speech systems can generate synthetic, high-quality audio of subjects using recordings of their speech. Are current audio generation techniques good enough to spoof voice authentication algorithms? We demonstrate, using freely available machine learning models and a limited budget, that standard speaker recognition and voice authentication systems are indeed fooled by targeted text-to-speech attacks. We further show a method that reduces the data required to perform such an attack, demonstrating that more people are at risk of voice impersonation than previously thought.
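The attack described above boils down to asking a speaker verification system whether a TTS-generated clip matches a victim's enrolled voice. The sketch below illustrates that check, assuming the open-source Resemblyzer speaker-embedding library; the file names and the 0.75 acceptance threshold are illustrative placeholders, not the specific systems or settings evaluated in the talk.

    # Minimal sketch of a speaker-verification check that a TTS spoof would target.
    # Assumes Resemblyzer (pip install resemblyzer); file names and the 0.75
    # threshold are illustrative only.
    from resemblyzer import VoiceEncoder, preprocess_wav
    import numpy as np

    encoder = VoiceEncoder()

    # Embed the victim's genuine enrollment recording and a TTS-generated clip.
    enrolled = encoder.embed_utterance(preprocess_wav("victim_enrollment.wav"))
    synthetic = encoder.embed_utterance(preprocess_wav("tts_generated_sample.wav"))

    # Resemblyzer embeddings are L2-normalized, so a dot product is cosine similarity.
    similarity = float(np.dot(enrolled, synthetic))
    print(f"similarity = {similarity:.3f}")
    print("ACCEPTED as target speaker" if similarity >= 0.75 else "REJECTED")

If the synthetic clip scores above the system's threshold, the verification step accepts the impersonation; the talk examines how little victim audio is needed to reach that point.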
Speakers
Azeem Aqil
Azeem Aqil is a security engineer at Salesforce. He works on building and maintaining the detection and response infrastructure that powers Salesforce security. Azeem is an academic turned hacker who has published and spoken at various academic security conferences.
John Seymour
John Seymour (_delta_zero) performs machine learning on log data by day, and writes his dissertation on malware datasets by night. He was voted "most likely to create Skynet" by @alexcpsec, and he toys with offensive uses for machine learning in his free time. He has spoken at Black Hat USA, DEF CON, SecTor, BSidesLV/Charm, and the NIPS workshop on Machine Deception.