
Meta Creates New Tool for Detecting AI-Generated Voice

Artificial intelligence is revolutionizing the world of technology, but it also presents serious challenges. One recent concern is the use of AI to create fake voices for scams and disinformation campaigns. To address this issue, Meta has developed a new tool called AudioSeal.

AudioSeal embeds watermarks in audio clips created by artificial intelligence: hidden signals that detection software can pick up but that are imperceptible to the human ear. The system can also identify which parts of an audio file were generated by AI, which is crucial at a time when deepfake audio manipulation is on the rise.

AudioSeal uses two neural networks. One embeds the watermark signal in the audio recording; the other quickly detects that signal. Because detection works at a localized level, the watermark can still be identified even if the audio is cut or edited.
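AudioSeal's networks are learned end to end, and the details above are only a high-level description. As a rough illustration of the underlying idea, here is a minimal, hypothetical NumPy sketch of classical key-based watermarking with windowed detection; the function names, key, and amplitude are illustrative assumptions, not Meta's actual API or method:

```python
import numpy as np

def embed_watermark(audio: np.ndarray, key: int = 123,
                    amplitude: float = 0.05) -> np.ndarray:
    """Add a low-amplitude pseudorandom signal derived from a secret key.

    At a small amplitude relative to the audio, the added signal is
    effectively inaudible -- the basic idea behind audio watermarking.
    """
    rng = np.random.default_rng(key)
    watermark = rng.standard_normal(len(audio)) * amplitude
    return audio + watermark

def detect_watermark(audio: np.ndarray, key: int = 123,
                     amplitude: float = 0.05,
                     window: int = 4096) -> np.ndarray:
    """Score each window by correlating it with the expected watermark.

    Windowed scoring gives coarse localization: segments that were
    replaced or regenerated without the watermark score near zero.
    """
    rng = np.random.default_rng(key)
    watermark = rng.standard_normal(len(audio)) * amplitude
    scores = []
    for start in range(0, len(audio) - window + 1, window):
        a = audio[start:start + window]
        w = watermark[start:start + window]
        # Normalized correlation: high for watermarked audio, near zero otherwise.
        scores.append(np.dot(a, w) / (np.linalg.norm(a) * np.linalg.norm(w) + 1e-12))
    return np.array(scores)

# Toy demo: watermark a noise "recording" and compare detector scores.
rng = np.random.default_rng(0)
clean = rng.standard_normal(16384)      # stand-in for an audio waveform
marked = embed_watermark(clean)

print(detect_watermark(marked).mean())  # clearly positive
print(detect_watermark(clean).mean())   # near zero
```

Unlike this sketch, which assumes the audio stays aligned with the keyed signal, AudioSeal's neural detector is trained to survive edits such as cutting and re-encoding.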

One of AudioSeal's major strengths is its detection accuracy. According to Hady Elsahar, a researcher at Meta, the system achieved accuracy between 90% and 100% in testing, a significant improvement over previous attempts to watermark AI-generated audio.

Additionally, AudioSeal is available for free on GitHub, meaning anyone can download it and start using it to add watermarks to AI-generated audio clips.

AudioSeal represents a promising development in combating misleading information and fake-audio scams. For the tool to be effective at scale, however, it will need to overcome technical challenges and be adopted as an industry standard. The technology community will need to collaborate to strengthen these watermarks and broaden their applicability.

The Meta team will present their work at the International Conference on Machine Learning in Vienna, Austria, in July. This could be a significant step towards broader adoption and the establishment of standards for detecting AI-generated audio.

You can access the study on arxiv.org.
