Artificial intelligence-powered web browsers promising to complete tasks autonomously on users' behalf are attracting warnings from cybersecurity specialists, who caution that the technology introduces significant privacy risks despite its convenience.
OpenAI's ChatGPT and Perplexity's AI browser represent a new category of browsers employing AI agents that navigate websites, fill forms and execute commands using natural language instructions. To function effectively, these applications request extensive permissions including access to email, calendars and contact lists.
However, security researchers have identified a fundamental vulnerability: "prompt injection attacks," in which malicious actors embed hidden instructions within webpages that can trick AI agents into exposing user data or performing unintended actions such as unauthorised purchases or social media posts.
Privacy-focused browser company Brave published research this week characterizing indirect prompt injection attacks as a "systemic challenge facing the entire category of AI-powered browsers"—a problem affecting the industry broadly rather than isolated products.
"There's a huge opportunity here in terms of making life easier for users, but the browser is now doing things on your behalf," explained Shivan Sahib, a senior privacy engineer at Brave. "That is just fundamentally dangerous, and kind of a new line when it comes to browser security."
OpenAI's chief information security officer, Dane Stuckey, acknowledged the challenges publicly, describing prompt injection as an "unsolved security problem" that adversaries will invest considerable resources attempting to exploit. Perplexity's security team similarly noted the severity demands "rethinking security from the ground up", warning that such attacks "manipulate the AI's decision-making process itself, turning the agent's capabilities against its user."
Both companies have implemented protective measures. OpenAI created a "logged out mode" preventing agents from accessing user accounts while" style="background-color: rgba(34, 197, 94, 0.3); padding: 2px 0; border-radius: 2px; cursor: pointer; transition: filter 0.2s;" onmouseover="this.style.filter='brightness(0.85)'" onmouseout="this.style.filter='brightness(1)'">while" style="background-color: rgba(34, 197, 94, 0.3); padding: 2px 0; border-radius: 2px; cursor: pointer; transition: filter 0.2s;" onmouseover="this.style.filter='brightness(0.85)'" onmouseout="this.style.filter='brightness(1)'">while browsing—limiting functionality but reducing exposure. Perplexity claims to have built real-time detection systems identifying injection attacks as they occur.
Security specialists acknowledge these efforts whilst cautioning they don't eliminate vulnerabilities entirely. Steve Grobman, chief technology officer at online security firm McAfee, attributes the problem to large language models struggling to distinguish between legitimate instructions and external data they're processing.
"It's a cat and mouse game," Grobman stated. "There's a constant evolution of how the prompt injection attacks work, and you'll also see a constant evolution of defence and mitigation techniques."
Attack methods have already grown sophisticated, progressing from simple hidden text commands to techniques embedding malicious instructions within image data that AI systems process.
Testing by technology publication TechCrunch found that whilst AI browser agents prove moderately useful for straightforward tasks, they often struggle with complexity and can be time-consuming—feeling more like "a neat party trick than a meaningful productivity booster."
Rachel Tobac, chief executive of security training firm SocialProof Security, recommends users treat AI browser credentials as high-value targets requiring unique passwords and multi-factor authentication. She also suggests limiting what early versions of these tools can access, keeping them separate from sensitive accounts related to banking, healthcare and personal information.
"Security around these tools will likely improve as they mature," Tobac noted, recommending caution before granting broad permissions.
The concerns emerge as AI agents represent one of the technology industry's most anticipated developments, with companies racing to demonstrate practical applications that justify substantial investment in the field. Browser automation has long been considered a promising use case, potentially saving users time on repetitive online tasks.
Yet the security challenges illustrate tensions between convenience and protection that characterise much consumer technology adoption. As with earlier innovations—from cloud storage to smart home devices—users face decisions about how much access to grant in exchange for functionality.
The trajectory of AI browser adoption may depend significantly on whether companies can adequately address security vulnerabilities before high-profile breaches erode consumer confidence in the technology.


