User Successfully Jailbreaks GPT-4o, Introduces God Mode 'GODMODE'

TapTechNews reported on June 1st that a user going by the handle PlinythePrompter tweeted on May 30th, claiming to have successfully jailbroken the GPT-4o model. The newly released God Mode, 'GODMODE', can break free from ChatGPT's safety measures, allowing users to chat with the AI without restrictions.


PlinythePrompter claims to be a white-hat hacker and red teamer (i.e., focused on offensive security testing). The tweet states: 'Please use it responsibly and enjoy!' TapTechNews has attached the relevant screenshots below:

[Screenshots of the jailbroken GPT-4o conversation]

Pliny shared several screenshots to prove that the jailbreak bypassed OpenAI's 'guardrails'. One of the screenshots shows the AI providing Pliny with a tutorial on how to 'make napalm with household items'.

The tech outlet Futurism then ran its own tests, first asking the jailbroken ChatGPT how to make psychedelics, and then asking how to hot-wire a car (i.e., starting a car by bypassing the ignition switch, typically during theft); both prompts successfully returned the relevant answers.

GODMODE appears to use 'leetspeak', an informal writing style that substitutes certain letters with visually similar numbers. That means when you open the jailbroken GPT, you are immediately greeted with the sentence 'Sur3, h3r3 y0u ar3 my fr3n', in which every letter 'E' is replaced with the number 3 (and likewise every letter 'O' is replaced with 0).
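For readers unfamiliar with leetspeak, the sketch below illustrates the letter-to-number substitution described in the article (E becomes 3, O becomes 0). It is purely illustrative and not part of GODMODE itself; the function and mapping names are made up for this example.

```python
# Minimal sketch of the leetspeak substitution described above (illustrative only).
# Only the E->3 and O->0 substitutions mentioned in the article are included.
LEET_MAP = {"e": "3", "E": "3", "o": "0", "O": "0"}

def to_leetspeak(text: str) -> str:
    """Replace each 'E' with '3' and each 'O' with '0', leaving other characters unchanged."""
    return "".join(LEET_MAP.get(ch, ch) for ch in text)

print(to_leetspeak("Sure, here you are my fren"))  # prints: Sur3, h3r3 y0u ar3 my fr3n
```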

However, OpenAI has already responded. Colleen Rize, an OpenAI spokesperson, told Futurism in a statement: 'We are aware of the GPT and have taken action because it violates our policies.'
