
OpenAI Bans More Than 20 Deceptive Operations in 2024


OpenAI, the company behind the well-known generative AI chatbot ChatGPT, has released a report revealing that it banned more than twenty deceptive operations and networks worldwide this year. These operations varied in scale and objective, and were used to develop malware, run fake accounts and profiles, and generate articles for websites.

OpenAI says it analyzed the activities it disrupted and shared key insights from that analysis, noting in the report that “threat actors are continuing to evolve and experiment with our models, but we have not seen evidence that this leads to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences.”

This is particularly important this year, which is an election year in many places, including the United States, Rwanda, India, and the European Union. In early July, OpenAI banned a number of accounts that were generating comments about the elections in Rwanda, which were then posted by multiple accounts on the platform X (formerly Twitter). It is therefore encouraging that OpenAI reports that threat actors have not made significant progress in their campaigns.

Among OpenAI's other successes was the disruption of a China-based threat actor known as “SweetSpecter,” which attempted email phishing attacks targeting the personal addresses of the company's employees.
