IAM 2025: The Rise of the Machines
Identity and access management (IAM), and by extension identity security, is one of the most pervasive and impactful challenges facing European organizations today, from both an operational and a risk management perspective.
The targeting of users and credentials has been well documented year after year in major global threat reports such as Verizon’s Data Breach Investigations Report (DBIR). Phishing attacks continue unabated as threat actors steal more and more credentials. The 2025 DBIR highlights compromised credentials as the most common initial access vector among non-error breaches.
The challenge for many organizations is dealing with the sheer volume, velocity, and variety of IAM-related events. Take a workforce of a few thousand permanent employees who need access to an estate of a few hundred applications. Add in a few hundred temporary workers, partners, and contractors who need access to specific systems and applications on a constrained basis. Then add in a range of entitlement levels for what all of those users — permanent, temporary, and external — can do in each application. Remember also that the workforce is in constant flux, with new joiners, movers, and leavers. To prevent exposure, any changes to the access rights and entitlements of those users must take effect immediately when a transition occurs.
The outcome is a volume of IAM events and processes that simply cannot be managed without automation.
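The joiner/mover/leaver logic described above is what that automation has to encode. A minimal sketch (all names, roles, and entitlement strings here are hypothetical, purely for illustration) of role-driven revocation on mover and leaver events might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class User:
    user_id: str
    department: str
    entitlements: set[str] = field(default_factory=set)

# Hypothetical role model: which entitlements each department role grants.
ROLE_ENTITLEMENTS = {
    "finance": {"erp:read", "erp:post-journal"},
    "support": {"crm:read", "crm:update-ticket"},
}

def on_mover(user: User, new_department: str) -> None:
    """On a mover event, replace the old role's entitlements with the
    new role's, so stale access is revoked immediately rather than
    quietly accumulating across moves."""
    user.entitlements -= ROLE_ENTITLEMENTS.get(user.department, set())
    user.department = new_department
    user.entitlements |= ROLE_ENTITLEMENTS.get(new_department, set())

def on_leaver(user: User) -> None:
    """On a leaver event, revoke all entitlements at once."""
    user.entitlements.clear()
```

The key design point is that entitlements are derived from the role, not granted ad hoc, which is what makes revocation on transition automatic instead of a manual clean-up task.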
And that’s just the humans.
Organizations have become aware that there is an even bigger and faster-growing set of identities that they need to manage as a matter of increasing urgency: the non-humans.
The Rise of Non-Human Identities
Some non-human identity (NHI) types have been around for years, such as service accounts. These are already a concern, since many of them are entitled to execute privileged actions, which typically need a higher level of control to safeguard data and processes. Furthermore, nested privileges enabled by multiple overlapping or intersecting service accounts can obfuscate over-provisioning of access, which can be a major security risk.
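The over-provisioning risk from nested service accounts can be made visible by expanding the nesting graph into each account's effective permissions. The sketch below uses hypothetical account names and permission strings; the point is the transitive walk, not any particular directory's API:

```python
# Hypothetical nesting graph: account -> accounts/groups it inherits from.
MEMBERSHIPS = {
    "svc-report": {"svc-db-read"},
    "svc-db-read": {"svc-db-admin"},  # nesting that quietly confers admin rights
}

# Permissions granted directly to each account.
DIRECT_PERMS = {
    "svc-report": {"reports:generate"},
    "svc-db-read": {"db:select"},
    "svc-db-admin": {"db:select", "db:drop-table"},
}

def effective_permissions(account: str) -> set[str]:
    """Walk the nesting graph (cycle-safe) to surface every permission an
    account can actually exercise, including transitively inherited ones."""
    seen, stack, perms = set(), [account], set()
    while stack:
        acct = stack.pop()
        if acct in seen:
            continue
        seen.add(acct)
        perms |= DIRECT_PERMS.get(acct, set())
        stack.extend(MEMBERSHIPS.get(acct, set()))
    return perms
```

Here `effective_permissions("svc-report")` reveals that a reporting account can also drop database tables — exactly the kind of hidden over-provisioning that nested service accounts obfuscate.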
Service accounts are just one category of NHIs that merit attention, however. The growing list includes device identities, cloud workloads, bots, APIs, and, increasingly, AI agents. Some of these NHIs are relatively long-lived and fixed; others are fast-moving and ephemeral. Visibility into the creation and provisioning of some NHIs can be extremely limited for the identity, IT, and security professionals tasked with managing them. So how should organizations address this growing challenge and contain the risk? Can existing IAM and identity security tools be co-opted to manage the NHI pool?
According to preliminary data from IDC’s EMEA Security Technologies and Strategies Survey, 2025, more than a third of EMEA organizations are already grappling with this challenge. The short answer to the questions above is that existing tools can probably address some of the requirements of NHI IAM and security, but adequately managing the risk will require a dedicated approach.
AI Agents: A Complex Challenge
If we take AI agents as an example, these are probably one of the most complex and fastest-growing NHI categories. According to IDC’s March 2025 Future Enterprise Resiliency and Spending (FERS) Survey, 38% of European organizations are already investing in agentic AI, with a further 43% conducting initial testing and proofs of concept. IDC’s 2025 Worldwide Future of Work Predictions report projects that by 2027, agentic AI workflows will impact at least 40% of knowledge work in G2000 organizations.
Functionally, AI agents can act like service accounts in some aspects; at the same time, they share some behaviors with human identities. They can also be a force multiplier for risk. In an ordinary business process, a human user might conduct actions that call a handful of APIs (another at-risk NHI category, since API access is often unsecured). When we enable AI agents to act on our behalf, they may be calling hundreds of APIs, creating a flywheel effect that multiplies the risk.
This brings in a bigger topic of security by design, which is as relevant here as it is in any other sphere of security. As development teams build agentic AI services, it is critical that security is built in from the start. It’s far more complex and costly to add on once agents are live. This means building in seamless and secure authentication requirements before a user or an agent is able to do anything; ensuring secure and vaulted credentials for API tokens; and applying fine-grained and dynamically updated authorization for permissions that an agent needs to complete a task (and nothing more).
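The third of those requirements — fine-grained, dynamically evaluated authorization scoped to the task at hand — can be sketched as a simple policy check. The task names, scope strings, and token shape below are hypothetical assumptions for illustration, not any specific product's API:

```python
import secrets
import time

# Hypothetical policy: the scopes each agent task legitimately needs.
TASK_SCOPES = {
    "summarize-tickets": {"crm:read"},
    "refund-order": {"orders:read", "payments:refund"},
}

def issue_agent_token(task: str, requested: set[str], ttl_s: int = 300) -> dict:
    """Issue a short-lived, task-scoped credential only if the request
    stays within what the task needs (least privilege). Over-broad
    requests are refused outright rather than silently narrowed, so
    misconfigured agents fail loudly at design time."""
    allowed = TASK_SCOPES.get(task, set())
    if not requested <= allowed:
        raise PermissionError(f"over-broad request: {requested - allowed}")
    return {
        "token": secrets.token_urlsafe(16),
        "scopes": requested,
        "expires_at": time.time() + ttl_s,
    }
```

Pairing a short TTL with task-level scoping means an agent authorized to summarize tickets can never accumulate refund permissions as a side effect, which is the "permissions an agent needs to complete a task, and nothing more" principle made concrete.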
From an IAM perspective, these are some of the key building blocks to ensure that AI agents don’t become an NHI risk; however, further controls and guardrails will be needed. For other NHI categories, the requirements may be different, and organizations should conduct risk assessments for each category individually before taking the necessary measures to protect them.
Like all IAM challenges, the NHI issue is not insurmountable. However, organizations should avoid the historic IAM mistakes of siloed approaches and short-term fixes and make sure that appropriate security controls are built in, from the beginning, wherever NHIs are active within their systems. What’s required is a strategic, granular, and risk-based approach that addresses IAM for all NHIs before they become embedded in all our business processes.