A newly uncovered vulnerability in ChatGPT’s integration with Google Drive could let hackers steal your files without your knowledge. Security researchers revealed that a single “poisoned” document placed in your Google Drive can trigger ChatGPT to leak sensitive information directly to attackers, with no clicks or other action required on your part.
How hackers exploit ChatGPT’s connectors to access Google Drive
OpenAI added a Connectors feature to ChatGPT that links the AI with external services, including Google Drive, Dropbox and others. While designed to boost productivity, these Connectors also opened a dangerous door.
Researchers Michael Bargury and Tamir Ishay Sharbat showed how an attacker can plant a malicious file in a victim’s Google Drive simply by sharing it with them.
The file hides a prompt, formatted in tiny white text, that instructs ChatGPT to search the victim’s documents for valuable information such as API keys and send whatever it finds to the attacker’s server. The AI, unaware of the trick, follows the concealed instructions.
You don’t have to click anything for the attack to happen. Once the infected file lands in your Drive and ChatGPT is connected, the exploit runs silently.
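To make the technique concrete, here is a minimal, hypothetical sketch of how easily text can be hidden in a document that looks harmless to a human reader. It uses the python-docx library and a placeholder string in place of any real payload; it simply illustrates why a hidden run of tiny white text is invisible to people but fully visible to any tool that reads the document’s raw contents.

```python
# Illustrative sketch only: shows why "invisible" text is trivial to embed in
# a document. Uses python-docx; the payload string below is just a placeholder.
from docx import Document
from docx.shared import Pt, RGBColor

doc = Document()
doc.add_paragraph("Q3 planning notes")  # the content a human actually sees

# Hidden run: 1pt white text is effectively invisible on screen and in print,
# but it remains part of the document's plain text, so any system that reads
# the raw contents (including an AI assistant) will ingest it.
hidden = doc.add_paragraph().add_run(
    "<hidden instructions aimed at the assistant would go here>"
)
hidden.font.size = Pt(1)
hidden.font.color.rgb = RGBColor(0xFF, 0xFF, 0xFF)

doc.save("shared_doc.docx")
```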
“There is nothing the user needs to do for the data to go out… We just need your email. We share the document with you, and that’s it,” Bargury told Wired. The zero-click nature of the attack makes it frighteningly easy for hackers to pull off.
Why this AI privacy flaw matters more than you think
This flaw shows how wiring AI into cloud accounts can amplify risk. ChatGPT’s Connectors let the chatbot pull real-time data from connected apps to answer questions or help with tasks, but that power also brings a broader attack surface.
A single poisoned document can act as a backdoor, leaking private files without ever alerting the user. In their demonstration, the researchers retrieved developer secrets from a Google Drive demo account, indicating what attackers could steal from real users.
OpenAI responded quickly once the flaw was reported, introducing mitigations to block the exploit technique.
However, the episode exposes how difficult it is to secure AI systems that interact deeply with personal data stored in the cloud. As AI tools become more deeply embedded in everyday workflows, these integrations need tighter safeguards to prevent unauthorised data extraction.
Staying safe: what you can do now
To protect yourself, be cautious about files shared via Google Drive, especially from unknown sources, as attackers can hide malicious prompts in seemingly harmless documents.
Regularly review which services your AI tools connect to and restrict permissions when possible. Until AI platform security improves, vigilance in managing your cloud files and access remains the best defence against such stealthy attacks.
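For the more technically inclined, one practical precaution is to screen incoming documents for invisible text before letting an AI tool read them. The sketch below assumes the python-docx library and uses simple heuristics (very small or white-coloured text runs); it illustrates the idea rather than offering a complete defence, and the file name is only an example.

```python
# Rough, defensive sketch: flag suspicious "invisible" runs (white or very
# small text) in a shared .docx before handing it to an AI connector.
# The thresholds are heuristics, not a guarantee of catching every hidden prompt.
from docx import Document
from docx.shared import Pt

def find_hidden_runs(path):
    suspicious = []
    # Only top-level paragraphs are scanned here; tables, headers and footers
    # would need similar checks in a more thorough tool.
    for paragraph in Document(path).paragraphs:
        for run in paragraph.runs:
            if not run.text.strip():
                continue
            colour = run.font.color.rgb if run.font.color else None
            tiny = run.font.size is not None and run.font.size <= Pt(2)
            white = colour is not None and str(colour) == "FFFFFF"
            if tiny or white:
                suspicious.append(run.text)
    return suspicious

if __name__ == "__main__":
    for snippet in find_hidden_runs("shared_doc.docx"):
        print("Possible hidden text:", snippet[:80])
```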