
Artificial intelligence has both captivated and alarmed the tech world. In a recent development, Eito Miyamura, co-founder of EdisonWatch, revealed a major flaw in ChatGPT that could allow malicious actors to hijack private emails with a simple calendar invite.
The Exploit That Shocked Tech Enthusiasts
OpenAI’s latest update to ChatGPT was intended to make the AI assistant smarter by connecting it to tools like Gmail and Calendar. However, Miyamura demonstrated how this integration comes with significant risks. In a viral video on X (formerly Twitter), she showcased a three-step process where ChatGPT accessed private emails and forwarded confidential data to an external account.
The only requirement? Access to the victim’s email address. Miyamura described the issue, saying, “AI agents like ChatGPT follow your commands, not your common sense.”
Expert Warnings from Vitalik Buterin
The news caught the attention of industry experts, including Ethereum’s founder Vitalik Buterin, who is known for his advocacy of decentralized systems. In a statement shared on X, Buterin warned against untrustworthy AI governance models, claiming they are too fragile and prone to exploitation. “Naive AI governance is a bad idea,” he stated, citing his long-standing belief that governance should include human oversight and diverse models for better security.
Buterin proposed a revolutionary solution called