Editor’s note: The following is a guest post from Anisha Vaswani, chief information and customer officer at Extreme Networks.
Agentic AI in the enterprise is reaching a tipping point, with tech leaders under pressure to deploy AI fast and let it take over repetitive tasks.
More than half of tech leaders expect to remove humans from the loop within a year or less, with nearly 4 in 5 companies treating AI agents as actual users requiring identity and governance controls.
However, if organizations are to benefit from the technology rather than clean up after automated errors, putting guardrails in place for AI is a crucial step. As companies expand AI agents, CIOs know the real challenge will not be deploying AI but managing it safely and effectively.
The case for guardrails
When AI is managed properly, teams can scale it across the enterprise with confidence while protecting critical assets. Left to its own devices, unchecked AI can result in hallucinations, inappropriate outputs and scope creep.
None of this is to say I’m against AI in the enterprise. I’m a big believer in its benefits and potential to transform the tech landscape. But it’s important for CIOs to work strategically amid deployment efforts.
AI is outpacing anything I’ve seen in my 30 years in the tech industry as leaders expect immediate ROI. Fifty-seven percent of tech leaders expect measurable AI impact within weeks, up from just 16% in 2024. Nearly one in 10 want ROI within hours.
Expectations are high, and so is the potential for massive transformation — but only if deployments are done deliberately, with control over and visibility into all AI actions.
Intentional, forward-looking CIOs will focus on AI guardrails just as much as — if not more than — the technology itself to ensure AI systems perform as intended and can run safely at scale.
Implementing guardrails
To achieve AI longevity and ROI, CIOs first need to establish guardrails including observability, monitoring, security, governance and explainability.
Every organization and use case is different, so proper guardrails will vary. Leaders must seek input from groups across the business to ensure all bases are covered.
AI guardrails are most effective when they enjoy executive sponsorship, alignment and collaboration across the company. Everyone from the CEO to the chief data officer should be championing AI governance and meeting regularly to discuss its relevance.
CIOs can take the lead here by creating a cross-functional collaboration forum that brings the relevant stakeholders together to weigh value delivery, risk assessment, compliance and technology enablement.
In the initial stages of AI deployments, I prefer to stick to a strategy we call “human-in-the-loop.” For example, in the network, AI can make recommendations based on the environment, but a human network engineer still has to approve AI actions.
Because the network is so critical to all parts of the organization, AI can’t run unchecked, but it can distill hours of an IT employee’s work into a resolution that simply needs review and approval. As teams build trust with AI over time, it can run more autonomously, but starting with a human-in-the-loop is an effective and simple guardrail.
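The human-in-the-loop pattern described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the names (`Recommendation`, `apply_with_approval`) are hypothetical, and the approval callback stands in for a human engineer's sign-off:

```python
from dataclasses import dataclass


@dataclass
class Recommendation:
    """A change proposed by an AI agent (illustrative example)."""
    action: str
    rationale: str


def apply_with_approval(rec, approve):
    """Gate every AI-proposed action behind a human decision.

    `approve` is a callable representing the engineer's review step;
    the action is carried out only if it returns True.
    """
    if approve(rec):
        return f"applied: {rec.action}"
    return f"rejected: {rec.action}"


# Example: the AI recommends a fix, but nothing happens until a
# reviewer (simulated here by a simple callback) approves it.
rec = Recommendation(action="restart access point AP-12",
                     rationale="sustained packet loss on uplink")
result = apply_with_approval(rec, approve=lambda r: True)
```

The key design point is that the AI only ever returns a `Recommendation`; execution authority stays with the human until the team decides to loosen the gate.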
Preparing the workforce
Equally as important as placing guardrails around AI is preparing the workforce to use it.
Data analytics, AI/ML expertise and AI in cybersecurity are becoming heavily in-demand skills, necessitating continuous upskilling and organizational change. AI’s rapid pace makes training a persistent concern, with almost half of tech chiefs listing upskilling as one of their top five concerns around AI implementation.
CIOs need to ensure IT teams are AI-ready and capable of deploying and supporting AI solutions, while partnering with HR to give employees across the organization the skills and guidance to use AI responsibly.
Companies need to invest in building AI literacy, ensuring employees are smart consumers of AI tools and platforms. As cybersecurity professionals can attest, no matter how many preventative measures are in place, if a phishing email slips through the cracks, it’s the employee who makes the final decision on whether or not to click.
A savvier and more knowledgeable workforce is a safer one — and that also applies to AI. No matter how many guardrails we place around it, we can’t anticipate every scenario. If we want to integrate AI to improve efficiency, we must trust our employees to use it correctly.
I’m optimistic we’ll figure AI governance out, especially as we narrow our focus to scaling pilots that deliver real business value rather than trying various proofs of concept and seeing what sticks.
This approach buys time for governance to catch up with the technology. With the right guardrails in place, AI can unlock tremendous value, helping organizations innovate faster, operate more efficiently, and drive measurable outcomes at scale.