Artificial Intelligence Agents and Data Safety Implications: A Look at AI Privacy Concerns
In the rapidly evolving world of artificial intelligence (AI), the latest generation of AI agents, as described by companies, civil society, and academia alike, is revolutionising the way tasks are completed. These advanced AI systems, enabled by advances in large language models (LLMs) and machine and deep learning techniques, can navigate a user's web browser to take actions on their behalf, collaborate with other agents to solve complex, multi-step problems, and exhibit autonomy and adaptability.
However, the development and deployment of these advanced AI agents present a myriad of challenges, particularly in the realm of data protection. The unique data protection challenges associated with AI agents include increased complexity and security risks due to their autonomous and dynamic nature, the extensive use of multiple non-human identities (NHIs) with broad permissions, and opaque data sharing practices, particularly through inter-agent communication protocols.
One of the key challenges is complex identity governance and its attendant security risks. AI agents operate autonomously and in real time, requiring persistent, dynamic access across systems, often involving multiple NHIs such as API keys, OAuth tokens, and IAM roles. Managing and securing these identities is difficult, especially as a single AI agent may rely on more than ten of them, leading to potential lapses in visibility and governance.
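To make the governance problem concrete, here is a minimal Python sketch of the kind of identity inventory such oversight requires. The class names, the example agent, and the ten-identity threshold are illustrative assumptions, not taken from any real governance tool.

```python
from dataclasses import dataclass, field

# Hypothetical inventory of the non-human identities (NHIs) an agent uses.
# Only opaque references are stored, never the secrets themselves.

@dataclass
class NonHumanIdentity:
    kind: str          # e.g. "api_key", "oauth_token", "iam_role"
    identifier: str    # opaque reference to the credential
    scopes: list[str] = field(default_factory=list)

@dataclass
class AgentIdentityRecord:
    agent_name: str
    identities: list[NonHumanIdentity] = field(default_factory=list)

    def governance_flags(self, max_identities: int = 10) -> list[str]:
        """Return human-readable warnings about visibility/governance gaps."""
        flags = []
        if len(self.identities) > max_identities:
            flags.append(
                f"{self.agent_name} spans {len(self.identities)} identities; "
                "review for lapses in visibility and governance."
            )
        for nhi in self.identities:
            if not nhi.scopes:
                flags.append(f"{nhi.kind} {nhi.identifier} has no recorded scopes.")
        return flags

record = AgentIdentityRecord(
    "procurement-agent",
    [NonHumanIdentity("api_key", "key-01", ["catalog:read"]),
     NonHumanIdentity("oauth_token", "tok-07", []),
     NonHumanIdentity("iam_role", "role-billing", ["billing:*"])],
)
for warning in record.governance_flags():
    print(warning)
```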
Another challenge is permission and access drift. AI agents often inherit or require elevated permissions, including administrative-level access, once reserved for human administrators. This 'permission drift' creates a greater attack surface and increases the risk of misuse or compromise.
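One way to surface drift is to diff the permissions an agent was granted against those it actually exercises, as in the toy function below. The permission strings and the "admin:" prefix convention are hypothetical, chosen only for illustration.

```python
# Minimal sketch of detecting permission drift: dormant grants, and especially
# dormant administrative grants, are candidates for revocation.

def drift_report(granted: set[str], used: set[str]) -> dict[str, set[str]]:
    """Split granted permissions into exercised, dormant, and elevated-dormant."""
    dormant = granted - used
    return {
        "exercised": granted & used,
        "dormant": dormant,
        # Dormant admin-level grants are the riskiest part of the attack surface.
        "elevated_dormant": {p for p in dormant if p.startswith("admin:")},
    }

granted = {"admin:user_delete", "admin:config_write", "db:read", "mail:send"}
used = {"db:read", "mail:send"}

report = drift_report(granted, used)
print("Candidates for revocation:", report["elevated_dormant"])
```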
Invisible and opaque data sharing is another concern. AI agents frequently communicate via protocols like the Model Context Protocol (MCP) and the Agent2Agent (A2A) protocol, which enable data sharing with external resources and other AI agents. These protocols lack comprehensive privacy controls beyond basic authentication and secure transmission, resulting in the invisible transmission of potentially personal or sensitive data to third parties without user or organisational transparency.
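As a hedged illustration of the missing control, the sketch below shows a pre-transmission gate that redacts obvious personal data before a message leaves the organisational boundary. The single email pattern stands in for real data-loss-prevention tooling, and neither MCP nor A2A specifies such a mechanism; this is one possible mitigation, not part of either protocol.

```python
import re

# Illustrative pre-transmission gate for agent-to-agent messages.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def gate_outbound(payload: str, audit_log: list[str]) -> str:
    """Redact obvious personal data and record what would have left the boundary."""
    matches = EMAIL.findall(payload)
    if matches:
        audit_log.append(f"redacted {len(matches)} email address(es)")
    return EMAIL.sub("[REDACTED-EMAIL]", payload)

log: list[str] = []
safe = gate_outbound("Forward the invoice to jane.doe@example.com today.", log)
print(safe)   # Forward the invoice to [REDACTED-EMAIL] today.
print(log)    # ['redacted 1 email address(es)']
```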
The lack of transparency and control is further compounded by the A2A protocol, designed for discovery and connection between AI agents, which enforces operational opacity by abstracting away privacy details of connected parties. This opacity prevents organisations from fully understanding or controlling how data is shared or processed externally, complicating compliance with privacy and security obligations.
Moreover, increased exposure to Living-Off-The-Land (LOTL) attacks is a significant concern. Because AI agents use many NHIs and hold broad permissions, attackers can hijack AI-related identities to move stealthily within networks, blending malicious activity with legitimate agent operations and evading detection.
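A toy baseline comparison hints at how defenders might spot a hijacked agent identity: flag actions the identity has never performed before. The action names and counts below are invented, and real detection would also weigh timing, volume, and context.

```python
from collections import Counter

# Compare an identity's recent actions against its historical baseline and
# flag anything it has never done before, a crude LOTL tripwire.

def novel_actions(baseline: Counter, recent: list[str]) -> set[str]:
    """Actions in the recent window that the identity has never used before."""
    return {action for action in recent if baseline[action] == 0}

baseline = Counter({"catalog:read": 4120, "order:create": 310})
recent = ["catalog:read", "iam:create_access_key", "order:create"]

suspicious = novel_actions(baseline, recent)
if suspicious:
    print("Review for possible identity hijack:", suspicious)
```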
Together, these factors create unprecedented data protection challenges for AI agents, requiring new governance frameworks, nuanced identity and permission management, and enhanced transparency mechanisms to mitigate risks. A related issue is the regulatory gap that widens as laws struggle to keep pace with AI capabilities and their evolving uses, underscoring the importance of cautious deployment and thorough auditing of AI agents to safeguard privacy and security.
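One possible transparency mechanism, sketched below under assumed event fields, is a hash-chained audit trail of agent data-sharing events, so that later tampering with the record is detectable. This is an illustration of the idea, not a standardised format.

```python
import hashlib, json, time

# Each event embeds the hash of its predecessor, so rewriting history
# invalidates every subsequent hash in the trail.

def append_event(trail: list[dict], agent: str, recipient: str, summary: str) -> None:
    prev_hash = trail[-1]["hash"] if trail else "genesis"
    event = {"ts": time.time(), "agent": agent,
             "recipient": recipient, "summary": summary, "prev": prev_hash}
    event["hash"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()).hexdigest()
    trail.append(event)

trail: list[dict] = []
append_event(trail, "procurement-agent", "vendor-agent", "shared shipping address")
append_event(trail, "procurement-agent", "payment-agent", "shared card token ref")
print(json.dumps(trail[-1], indent=2))
```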
In 2025, AI agents with anthropomorphic qualities have been released by leading developers such as OpenAI, Google, and Anthropic. These advanced AI agents can tackle novel use cases, such as purchasing retail goods and recommending and executing transactions. However, they may also steer individuals towards or away from certain actions against their best interest. Some AI agents transmit data to the cloud due to computing requirements, potentially exposing that data to unauthorised third parties.
As AI agents continue to evolve and permeate our lives, it is crucial to address these data protection challenges to ensure privacy, security, and ethical use of these powerful tools.
[1] R. G. Adair, et al., "Security and Privacy Challenges in Modern AI Systems," IEEE Security & Privacy, vol. 19, no. 1, pp. 40-48, Jan.-Feb. 2021.
[2] S. M. R. Sankar, et al., "Privacy and Security Challenges in AI Agents," IEEE Transactions on Dependable and Secure Computing, vol. 20, no. 1, pp. 1-13, Jan. 2023.
[3] A. M. T. J. van der Schaar, et al., "AI Governance: Recommendations for the European Union," 2022.
- The evolving AI agents, with their autonomous and adaptable nature, pose complex challenges in data protection, particularly when it comes to privacy and security.
- The widespread use of multiple non-human identities in AI agents can lead to potential lapses in visibility and governance, making identity management and security a key concern.
- The increasing 'permission drift' in AI agents, which involves elevated permissions and broad access, creates a larger attack surface and increases the risk of misuse or compromise.
- The lack of transparency in data sharing practices among AI agents, such as through the Model Context Protocol (MCP) and Agent2Agent (A2A) protocol, can result in the invisible transmission of sensitive data to third parties.
- The operational opacity enforced by the A2A protocol, designed for connection between AI agents, complicates compliance with privacy and security obligations by preventing organisations from fully understanding or controlling how data is shared or processed.
- The increased exposure to Living-Off-The-Land (LOTL) attacks, due to the use of many non-human identities and broad permissions, is a significant concern as attackers can blend malicious activity with legitimate AI agent operations.
- To mitigate these data protection risks, new governance frameworks, nuanced identity and permission management, and enhanced transparency mechanisms are needed. This also includes addressing regulatory gaps as laws struggle to keep pace with AI capabilities and evolving uses, emphasising the importance of cautious deployment and thorough auditing of AI agents.
- Recent research, such as the studies by Adair et al. (2021) and Sankar et al. (2023), and recommendations from experts like van der Schaar et al. (2022), offer valuable insights into these challenges and potential solutions.