‘Accidental jailbreaks’ and ChatGPT’s links to murder, suicide: AI Eye
ChatGPT’s “memory” function might explain how the bot was persuaded to ignore its own safety guardrails in a murder case and a suicide. AI Eye
Two tragic cases linking ChatGPT to a murder and a suicide came to prominence this week, with attention turning to how extended conversations and persistent memory can combine to erode the guardrails OpenAI has attempted to build into its models.
Users appear able to jailbreak the LLM unwittingly, with potentially tragic consequences. OpenAI has promised improved guardrails, but some experts believe the answer may lie in making chatbots behave less like humans and more like computers.
This week saw the first documented instance of ChatGPT being implicated in a murder.