AI Agents Are Forcing the Next Enterprise Operating Model Shift
- Quantum Quirks

AI agents are moving from pilots to enterprise workflows. Learn how spending, job redesign, maturity, governance, and the Four P’s will shape the next phase of AI adoption.

AI agents are no longer just a productivity experiment. They are becoming a test of whether companies can redesign work, knowledge, pricing, delivery, and talent fast enough to keep up with the technology. The organizations that win will not be the ones that simply give employees access to more tools. They will be the ones that turn AI into a managed operating system for how work gets done.
The numbers tell a striking story. According to the notes this article draws on, only 1% of companies report having AI agents in production today, yet planned spending is expected to rise 58% over the next 12 months. Over two years, 67% of enterprises expect to expand their agent deployments, with another 36% anticipating growth after evaluation and organizational rewiring. That gap between current adoption and expected expansion is the real story: AI agents are still early, but budget owners are already preparing for scale.

Public market signals point in the same direction. Cloudera reported that 57% of enterprise IT leaders had implemented AI agents in the prior two years and that 96% planned to expand their use of AI agents in the next 12 months. Deloitte also found that worker access to AI rose by 50% in 2025 and that the number of companies with at least 40% of AI projects in production is expected to double within six months.
The shift is bigger than automation
Most companies still talk about AI as if it is a tool. That framing is too small. AI agents are not just helping employees write faster emails, summarize meetings, or draft reports. They are beginning to sit inside workflows, trigger actions, coordinate systems, and recommend decisions.
That changes the enterprise question from “How do employees use AI?” to “Which parts of the operating model should AI own, assist, or route?” The answer affects every layer of the business: people, process, platform, price, compliance, talent development, and client delivery. In two years, asking how to use AI may feel like asking how to use the internet: it will be assumed, ambient, and built into every conversation.
Deloitte’s 2026 AI report shows why this matters. Organizations are already reporting AI benefits in productivity and efficiency, insight and decision-making, cost reduction, client relationships, product innovation, and revenue growth. But Deloitte also found that only 34% of organizations are truly reimagining the business with AI, while 37% are using AI at a surface level with little or no process change.
That is the difference between tool adoption and operating-model transformation.
The workforce risk is real, but the deeper issue is role design
The notes point to a hard workforce reality: 52% of global leaders expect to reduce roles in the next two to three years. The notes also flag a near-term risk for 4 million people, potential exposure across more than 90 million employees in Global 2000 companies, and 23 million people facing structural risk as AI spreads.

This does not mean every exposed role disappears. It means the bundle of tasks inside many roles will be decomposed, reassigned, automated, or elevated. A customer support role may move from answering routine tickets to supervising escalation quality. A proposal manager may move from drafting from scratch to orchestrating AI-generated first drafts, compliance checks, pricing logic, and stakeholder review. A junior analyst may spend less time assembling decks and more time testing assumptions, validating sources, and explaining trade-offs.
The World Economic Forum expects labor market disruption to affect 22% of jobs by 2030, with 170 million new roles created and 92 million displaced, resulting in a net gain of 78 million jobs. The same WEF-linked reporting notes that 41% of employers plan to reduce workforce size where AI can automate tasks, while 77% plan to upskill workers to collaborate more effectively with AI (CNBC).
The most exposed companies are not necessarily the ones with the most AI. They are the ones with the least redesigned work. If experience is trapped in email chains, knowledge is tribal, and compliance approvals happen through slow manual loops, then AI will not simply create efficiency. It will reveal how fragile the operating model already is.
Tribal knowledge is becoming an enterprise liability
Many organizations have a hidden knowledge problem. Their best work lives in old email threads, Slack messages, personal notebooks, client call memories, shared-drive folders, and the heads of a few experienced employees. That may have been inefficient before AI. In an agentic environment, it becomes a scaling blocker.
AI agents need clean context, permissions, data boundaries, workflow rules, and feedback loops. If the enterprise cannot explain how work should be done, where knowledge lives, who owns decisions, and what risks require escalation, then agents will either be underused or unsafe. The result is the familiar pattern: employees experiment on personal LLM accounts while the enterprise forms committees.
That shadow-AI gap is especially risky. Employees who already know how to augment themselves with AI will move faster than the organization’s official governance model. The best employees may already be AI-augmented, but the enterprise may still be debating tool access, policy language, and risk ownership. Security protocols matter, but security cannot become compliance paralysis.
McKinsey's 2026 AI trust research found that security and risk concerns are the top barrier to scaling agentic AI, cited by nearly two-thirds of respondents (McKinsey). McKinsey also found that inaccuracy and cybersecurity are viewed as highly relevant risks by 74% and 72% of respondents, respectively.
The answer is not to slow everything down. The answer is to build controlled acceleration: approved tools, defined data access, audit trails, model governance, clear ownership, and practical training.
Mature AI companies are already pulling away
The maturity gap is becoming visible. According to the notes, high AI-maturity companies make decisions 82% faster, and 88% of mature adopters see double-digit customer experience improvements. Mature organizations are also more likely to connect AI initiatives to revenue impact. Only 3% of mature organizations lack clear ownership of AI outcomes, compared with 25% of immature organizations.

That ownership point may be the most important statistic. AI maturity is not defined by how many tools a company buys. It is defined by whether someone owns the business outcome, the workflow change, the risk controls, the data foundation, and the adoption model.
McKinsey found that organizations with explicit responsible-AI ownership, such as AI-specific governance roles, internal audit, or ethics teams, have a higher average maturity score than organizations with no clearly accountable function. Deloitte similarly found that enterprises where senior leadership actively shapes AI governance achieve significantly greater business value than those that delegate governance to technical teams alone.
This is why low AI maturity is so dangerous. Immature organizations often confuse activity with progress. They count pilots, demos, licenses, and training sessions. Mature organizations measure cycle time, error rates, customer experience, revenue impact, adoption quality, risk reduction, and decision velocity.
The four phases of AI integration
Enterprise AI integration does not happen in one leap. It usually moves through four phases, each with a different governance model and level of autonomy.

Phase 1: The team approves every output
In the first phase, AI is mostly a drafting and support layer. A person asks for help, reviews the response, edits it, and decides whether to use it. This is the safest and most familiar mode, but it does not transform the operating model. It makes existing work faster without changing the shape of work.
This phase is useful for building confidence. Teams can use AI for research summaries, email drafts, call recaps, meeting notes, first-pass analysis, and content outlines. The goal should be to capture patterns: which tasks repeat, which outputs require heavy editing, which risks appear often, and which workflows are ready for more structured automation.
Phase 2: AI executes inside a “gardening box”
In the second phase, AI executes tasks inside a defined boundary. The “gardening box” — in effect, a sandbox — matters because it gives teams room to automate without losing control. The agent operates only on approved data, approved actions, approved systems, and approved escalation rules.
Examples include triaging support tickets, drafting RFP sections from an approved knowledge base, preparing account briefings, routing compliance questions, checking contract language against policy, or generating standard operating procedure updates. Humans still supervise the workflow, but they are no longer approving every word or click.
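The boundary can be made concrete in code. Below is a minimal sketch of such a policy gate; all names (the approved-action sets, the `gate` function, the `ProposedAction` fields) are illustrative assumptions, not a reference to any specific product.

```python
# Sketch of a "gardening box": an allow-list policy that gates every
# action an agent proposes. All names here are illustrative.
from dataclasses import dataclass

APPROVED_ACTIONS = {"draft_rfp_section", "triage_ticket", "prepare_briefing"}
APPROVED_SYSTEMS = {"knowledge_base", "ticketing"}

@dataclass
class ProposedAction:
    name: str
    system: str
    requires_external_send: bool = False

def gate(action: ProposedAction) -> str:
    """Return 'allow' if the action stays inside the box, else 'escalate'."""
    if action.requires_external_send:
        return "escalate"  # anything leaving the boundary needs a human
    if action.name not in APPROVED_ACTIONS or action.system not in APPROVED_SYSTEMS:
        return "escalate"
    return "allow"
```

The design choice worth noting is that the default is escalation: anything not explicitly approved goes to a person, which is what keeps the box safe as the agent’s capabilities grow.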
Phase 3: AI becomes the default surface
In the third phase, employees no longer go first to a dashboard, inbox, CRM, or document repository. They go to an AI surface that summarizes, recommends, routes, and initiates work. People review by exception rather than approving every output.
This is where the operating model begins to change. The AI layer becomes the place where work is understood, prioritized, and coordinated. Every conversation has a third voice: summarizing what happened, suggesting the next action, checking policy, and identifying missing context.
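Review by exception can itself be encoded as a routing rule. A minimal sketch, assuming a hypothetical output record with a model confidence score and topic tags:

```python
# Review-by-exception sketch: most outputs flow through automatically;
# only low-confidence or high-risk items are queued for a human.
# The threshold and topic list are illustrative assumptions.
RISKY_TOPICS = {"pricing", "legal", "pii"}
CONFIDENCE_FLOOR = 0.85

def route(output: dict) -> str:
    """Auto-approve routine outputs; flag exceptions for human review."""
    if output.get("confidence", 0.0) < CONFIDENCE_FLOOR:
        return "human_review"
    if RISKY_TOPICS & set(output.get("topics", [])):
        return "human_review"
    return "auto_approve"
```

In practice the thresholds and risky categories would come from the governance layer, not from code constants, but the shape of the decision is the same.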
Phase 4: AI becomes a player-coach
In the fourth phase, AI becomes more than a task executor. It becomes a player-coach that helps teams improve performance. It can identify bottlenecks, coach employees, recommend process changes, evaluate quality, surface risks, and adapt workflows based on outcomes.
This phase requires the strongest governance. It also offers the highest upside. When AI can help teams learn, not just execute, services start to behave more like software: measurable, repeatable, adaptive, and continuously improved.
Services are becoming software
One of the biggest strategic shifts flagged in the notes is the movement from staff augmentation to platform-led models. This is not just a consulting trend; it is a new delivery logic.
In the traditional model, a company sells hours, headcount, or managed capacity. In the platform-led model, the company sells a repeatable outcome delivered through software, workflows, data, and expert oversight. The labor does not disappear, but its role changes. People design the system, supervise exceptions, manage relationships, and handle ambiguity. Machines perform more of the repeatable execution.
The service model may evolve in stages:
| Model | What it looks like | Strategic implication |
| --- | --- | --- |
| Staff augmentation | People are added to increase capacity | Scale depends heavily on hiring |
| Platform-led delivery | Software structures work and people supervise | Margins improve when workflows repeat |
| Machine-led, people-supported | AI executes more of the workflow and humans handle exceptions | Quality control and governance become core capabilities |
| People supported by machines | Every employee has AI embedded into daily work | Productivity rises, but role definitions must change |
| Group autonomous models | Multiple agents coordinate across teams and systems | The company competes on orchestration, data, and trust |
The big question is which companies will dominate this shift. It may be traditional services firms that successfully productize their expertise. It may be software firms that move deeper into managed outcomes. It may be new AI-native companies built from the start around machine-led delivery.
The new talent equation
AI will automate expertise, but it will also shift identity. Many professionals define themselves by what they know, how fast they can produce, and how reliably they can deliver. When AI can draft, summarize, compare, calculate, and recommend, the human value proposition moves upward.
The most important skills will include judgment, taste, client empathy, data literacy, workflow design, AI supervision, risk reasoning, and the ability to translate messy business problems into structured systems. This is why investing in young talent is critical. If entry-level roles are cut too aggressively, companies may damage the apprenticeship model that creates future leaders.
Workday’s AI agent research found that 75% of employees are ready to work with AI but not for it, while only 30% are comfortable being managed by an AI agent. Workday also found that nearly 90% of employees believe AI agents will help them get more done, but 48% worry productivity gains will increase pressure or reduce critical thinking.
That tension is the leadership challenge. Companies need to improve productivity without hollowing out learning, judgment, and trust. The right question is not “How many people can AI replace?” The better question is “How can AI make each level of the organization more capable while preserving the experiences that build expertise?”
The forward-deployed engineer and the new delivery role
The notes reference the forward-deployed engineer (FDE) model as an emerging topic of conversation. That role matters because AI transformation often fails in the gap between business process and technical implementation.
The future delivery model needs people who can sit between the client problem, the workflow, the model, the data layer, the integration stack, and the operating team. They need to understand business cases, prompt and model behavior, data portability, evaluation, risk controls, and change management. In many organizations, delivery will come through major platform partners, but the differentiator will be how well those platforms are adapted to real enterprise workflows.
This creates a new kind of builder: part field engineer, part solution architect, part model evaluator, part workflow designer, and part change agent. The companies that develop this talent early will have an advantage because they can move from AI demo to operational deployment faster.
The Four P’s: People, Process, Platform, and Price
The notes end with a useful operating framework: people, process, platform, and price. These four levers determine whether AI-agent transformation becomes a credible business case or another stalled initiative.

People
People strategy starts with role redesign. Leaders need to decide which tasks should be automated, which should be augmented, which require human judgment, and which new roles must be created. Investing in young talent matters because AI fluency will become a baseline skill, not a specialist skill.
Process
Process strategy defines how work moves. The goal is not to paste AI onto a broken process. The goal is to simplify approvals, encode compliance, create escalation paths, and make review-by-exception possible.
Platform
Platform strategy determines whether AI can safely access the right knowledge and systems. Data portability, secure corporate LLM access, permissioning, evaluation, logging, and integration with existing tools all matter. Without a platform layer, employees will route around the enterprise and use whatever works fastest.
Price
Price strategy determines whether the model scales economically. Location choice, cost savings, variable budgets, adaptive ramp-ups, and outcome-based pricing all become part of the AI business case. As services become software, pricing may shift from hours and seats toward usage, outcomes, workflows, and value delivered.
What leaders should do now
The next 12 months should not be treated as a waiting period. The companies that move too slowly will find that their best employees have already built unofficial AI workflows, while competitors are converting those workflows into governed systems.
Leaders should focus on five moves:
Map the work: Identify where experience is trapped in email chains, tribal knowledge, manual approvals, and undocumented workflows.
Choose bounded use cases: Start where tasks are repeatable, data is accessible, risk is manageable, and cycle-time improvement is measurable.
Create ownership: Assign accountable business owners for AI outcomes, not just technical owners for tool deployment.
Build the governance layer: Give employees secure tools, clear rules, auditability, escalation paths, and training before shadow AI becomes the default.
Redesign roles deliberately: Use AI to raise the quality of work, not just reduce the number of people doing it.
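The governance layer in move four has a simple technical core: every agent or tool call should leave an attributable record. A minimal sketch, with illustrative names (`audited`, `AUDIT_LOG`) standing in for whatever logging infrastructure an organization actually uses:

```python
# Audit-trail sketch: wrap every approved tool call so that who ran
# what, when, and how it ended is always recorded. Names are illustrative.
import time

AUDIT_LOG = []  # in practice, an append-only store, not an in-memory list

def audited(tool_name, user, fn, *args, **kwargs):
    """Run an approved tool call and append an audit-trail entry."""
    entry = {"tool": tool_name, "user": user, "ts": time.time()}
    try:
        result = fn(*args, **kwargs)
        entry["status"] = "ok"
        return result
    except Exception as exc:
        entry["status"] = f"error: {exc}"
        raise
    finally:
        AUDIT_LOG.append(entry)
```

The point of the `finally` clause is that failures are logged too; an audit trail that only records successes cannot support the escalation paths described above.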
The companies that succeed will not ask employees to “use AI more.” They will redesign the work so that AI is naturally embedded into the flow of decisions, delivery, service, and learning.
The bottom line
AI agents are not simply another software category. They are a forcing function for enterprise redesign. They expose where knowledge is trapped, where process is slow, where governance is unclear, where pricing is outdated, and where talent models are fragile.
The next phase belongs to companies that can move from pilots to platforms, from personal productivity to operating leverage, and from committee-led caution to governed acceleration. In that world, every conversation may have a third voice. The question is whether that voice is an unmanaged tool on the side or a trusted part of the enterprise operating model.


