The Simple Reason Agentic AI Won’t Replace People Quickly...
- Chris Lamberton
- Apr 24
- 3 min read
By Chris Lamberton, CEO at TrustPortal

At TrustPortal we're deeply invested in the concept of Agentic AI being able to transform a company's operations - and in fact we're one of the few companies that supports large numbers of different types of agent.
But let's get real ... there's far too much hype about Agentic AI immediately taking over operations and fully replacing people.
To illustrate this, let’s role-play the decision to adopt Agentic AI with the people actually responsible for a company’s operations - the Board…
This is what we think would happen...
Chris (Chairman): Hi everyone. Thanks to all directors and non-execs for joining this important meeting on how AI could shape our future.
You’ve all seen the briefing on Agentic AI. Today, we have two choices:
Option 1: Go all-in with fully autonomous AI Agents, transforming our operations and potentially delivering up to 80% efficiency savings.
Option 2: Use Agentic AI to support our staff, making them more productive but keeping them in control, with significantly lower overall efficiency savings.
Let’s open the floor. Sarah, I know you’re very interested in AI—want to kick us off?
Sarah (Board Member): I use ChatGPT all the time—it’s really, really impressive. But it’s not perfect. Is Agentic AI 100% accurate?
COO: It’s very accurate, but not perfect - maybe around 98–99% accuracy.
Sarah: And from reading the paper, I understand we will use different types of AI agents across the business?
COO: Yes. Some will read and understand documents and evidence, others will handle KYC and AML, do credit checks, or read or update core systems.
Sarah: And these agents will work together?
COO: Exactly. They’ll pass data and decisions between each other.
Sarah: So, if one makes a 1% error, that mistake could be passed along and amplified?
COO: Er, yes — that’s a possibility...
Sarah: And if the system is fully autonomous, how do we even know when something goes wrong?
COO: Well, if it's fully autonomous, that’s harder than with human oversight. So, we'll have to engineer in strong independent checking all the way through the process.
Sarah: Have we estimated what kind of damage a mistake could cause—financially, or to our brand?
COO: It depends on the situation, the type of error, where in the process it occurs, and so on. It’s difficult to predict.
Sarah: And how do we prove regulatory compliance - including for historic cases across different agents, and even different versions of an agent - if a regulator or customer asks?
COO: We’d need complete audit trails for every agent. Like engineering in separate strong checking, it’s doable, but it will take significant effort.
Sarah: But in Option 2, humans can check the AI’s decisions and step in if something’s wrong?
COO: Yes—though we lose efficiency that way.
Sarah: So, in summary: while the fully autonomous Option 1 could yield greater efficiencies, it’s not 100% accurate, issues can propagate, we cannot quantify the revenue or brand risk, there's lots of extra effort to check it's working correctly, and proving to our regulators that we’ve been end-to-end compliant will be a challenge?
COO: Ok … I can see where you’re going with this …
Chris: As board members with fiduciary and risk responsibility for this company, which option would you like to vote for?
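Sarah's compounding-error point can be illustrated with a quick back-of-the-envelope calculation (this sketch is ours, not from the boardroom dialogue, and it assumes each agent's errors are independent and uncaught):

```python
# Illustrative sketch: how per-agent accuracy compounds when agents
# hand work to each other in a sequential chain.
# Assumption: errors are independent and no check catches them mid-chain.

def chain_accuracy(per_agent_accuracy: float, num_agents: int) -> float:
    """Probability a case passes through every agent in the chain error-free."""
    return per_agent_accuracy ** num_agents

# One agent at 99% sounds fine; a 10-agent chain looks rather different:
print(round(chain_accuracy(0.99, 10), 3))  # -> 0.904, i.e. roughly 1 in 10 cases touched by an error
print(round(chain_accuracy(0.98, 10), 3))  # -> 0.817
```

In other words, even at the COO's optimistic 98–99% per-agent figure, a modest chain of cooperating agents lets errors through far more often than any single accuracy number suggests - which is exactly why Sarah presses on independent checking.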
We’ll be sharing more about the future of operations in an Agentic world, and obviously Agentic AI and other types of AI still have a huge part to play.
But maybe Agentic AI will not be as pervasive, or in every process, as the hype would have it … maybe there is still a place for people 😉