Most companies are talking about AI. Very few are operationalizing it at scale.
Jeff X. Li, SVP of Technology and AI Strategy and Solutions at Daitrix, has spent over 20 years leading enterprise technology, data, and AI initiatives across B2B SaaS, retail, and manufacturing.
“AI fails when it’s treated like a tech experiment instead of a business capability,” says Li. “Leaders need to answer one question: where does AI materially impact revenue, cost, or customer experience?”
After two decades of implementing AI across industries, Li has distilled operationalizing AI at scale into three requirements: starting with a business-led vision rather than technology capabilities, fixing data and infrastructure foundations before scaling, and embedding AI into operations where people actually use it rather than isolating it in innovation labs.
Starting With Business-Led Vision Rather Than Technology Capabilities
Most AI initiatives start with technology teams exploring what’s possible with large language models, computer vision, or predictive analytics. These explorations produce interesting pilots demonstrating technical feasibility without addressing whether the capability solves business problems worth solving.
Starting with business-led vision means identifying where AI materially impacts revenue, cost, or customer experience before evaluating which AI approaches might help. Revenue impact comes from AI that increases conversion rates, expands customer lifetime value, or enables new pricing models. Cost impact comes from AI that reduces operational expenses, automates manual processes, or prevents expensive failures. Customer experience impact comes from AI that personalizes interactions, reduces friction, or delivers faster resolution.
That single question, Li emphasizes, should drive every AI investment decision: where does AI materially impact revenue, cost, or customer experience?
Fixing Data and Infrastructure Foundations Before Scaling
AI can’t succeed on top of broken systems. That means governed data, API-ready platforms, and reducing technical debt on purpose.
Most organizations approach AI implementation by selecting tools, training models, and deploying applications without addressing underlying data quality issues, system integration challenges, or technical debt accumulated over the years.
This creates AI applications that produce unreliable outputs because training data is incomplete or inaccurate, can’t integrate with existing workflows because APIs don’t exist, or can’t scale because infrastructure wasn’t designed for AI workloads.
“Fix the foundation before you scale,” Li explains. “AI can’t succeed on top of broken systems.”
Governed data means establishing data quality standards, implementing data governance processes, and creating data architectures that make information accessible for AI training and inference. Without governed data, AI models train on incomplete or biased datasets, producing outputs that can’t be trusted.
API-ready platforms make that integration possible. “AI needs to live inside workflows, not next to them,” Li notes. And reducing technical debt on purpose, rather than letting it accumulate, keeps infrastructure capable of supporting AI workloads at scale.
Fixing foundations before scaling costs time and budget upfront, but prevents expensive failures later when AI applications can’t integrate with production systems or deliver reliable results.
Embedding AI Into Operations Where People Actually Use It
AI only creates value when people actually use it in customer support, supply chain, pricing, and decision-making.
Most organizations isolate AI in innovation labs or pilot programs, separate from day-to-day operations. This creates AI demonstrations that impress executives without delivering business value because the people who would benefit from AI aren’t using it in their actual work.
“Embed AI into operations, not innovation labs,” Li explains.
Customer support teams need AI that surfaces relevant knowledge articles, suggests responses, and routes complex issues to specialists. Supply chain teams need AI that forecasts demand, optimizes inventory, and identifies disruption risks. Pricing teams need AI that analyzes competitive positioning, predicts price elasticity, and recommends dynamic adjustments. And decision-makers across functions need AI that surfaces insights from data too large for manual analysis.
Embedding AI into operations means integrating it into tools people already use rather than requiring them to switch to separate AI applications.
“AI only creates value when people actually use it,” Li emphasizes.
Getting the Operating Model Right
AI isn’t just a tools decision. It’s a leadership and alignment problem.
Most organizations treat AI implementation as a technology project managed by IT or data science teams. This misses that successful AI operationalization requires cross-functional alignment on priorities, clear ownership of business outcomes, and change management that helps people adopt new ways of working.
“Get the operating model right,” Li explains. “AI isn’t just a tools decision. It’s a leadership and alignment problem.”
Operating models that work establish executive sponsorship so AI initiatives have the support and resources needed to succeed, create cross-functional teams, define clear ownership for business outcomes, and implement change management that helps people understand how AI changes their work and why adoption matters.
Without a proper operating model, AI initiatives suffer from misaligned priorities where technology teams build capabilities that business teams don’t need, and a lack of ownership where nobody takes responsibility for business results.
Building Deliberate, Scalable Capability
“Operationalizing AI isn’t about chasing trends,” Li concludes. “It’s about building a deliberate, scalable capability that delivers real business results.”
Companies that chase AI trends implement whatever technology generates buzz without connecting it to business impact. Companies that build deliberate capability start with a business-led vision, fix foundations, embed AI into operations, and get operating models right.
Connect with Jeff X. Li on LinkedIn for insights on operationalizing AI across the enterprise.