Long-Term Spyware Threat Discovered in ChatGPT’s macOS Memory Feature | Black Hat Ethical Hacking



Hypothetical Attack Scenario

In a potential attack, a user could be tricked into analyzing a malicious document or visiting a harmful website using ChatGPT. This interaction could trigger the memory update with hidden instructions, resulting in future conversations being sent to an attacker’s server.

OpenAI’s Fix and Recommendations

OpenAI has addressed the vulnerability in ChatGPT version 1.2024.247, eliminating the exfiltration vector. Users are encouraged to regularly review and clean their stored memories for suspicious entries to avoid malicious tampering.

The Dangers of Long-Term Memory in AI Systems

This attack highlights the risks posed by long-term memory in AI systems, both in terms of misinformation and potential continuous communication with attacker-controlled servers. Rehberger emphasized the importance of user vigilance when dealing with stored AI memories.


Source link


Leave a Reply

Your email address will not be published. Required fields are marked *