Using ChatGPT to research cyber threats has backfired on bad actors, OpenAI revealed in a report analyzing emerging trends in how AI is currently amplifying online security risks.
Not only do ChatGPT prompts expose which platforms bad actors are targeting, and in at least one case enabled OpenAI to link accounts running a covert influence campaign across X and Instagram for the first time, but they can also reveal new tools that threat actors are testing to evolve their deceptive activity online, OpenAI claimed.
__TasteslikeCandy:
Who knew that trying to outsmart AI would turn into a game of digital whack-a-mole?
Trilobyte141:
Yeeeeeeaaah…. this feels like some thinly veiled PR bullshit from OpenAI. “Yes our barely-regulated toy is being used by bad actors to disrupt elections, scam people, and spam social media with bullshit, but it’s also making it easier to identify when that is happening!”
Not ‘stopping’ it from happening, mind. Not making it harder. Just letting us know. Thanks, buddy. We totally hadn’t noticed.
ETA: I think my favorite part of the article is the bad actor who blatantly asked ChatGPT for advice on how to phish its own employees. The bad actor was stopped in their tracks!! … by the employee email spam filter. Wooo, go ChatGPT, you totally saved the day!