Exclusive: Rohini Sharma on monday.com's AI shift
Artificial intelligence is now a standard expectation in enterprise software, with Australian organisations no longer swayed by the mere presence of AI features. Instead, buyers are demanding clear evidence that the technology delivers real productivity gains and makes day-to-day work easier, according to monday.com.
"AI has moved from a differentiator to a baseline expectation. Australian organisations no longer ask if a platform has AI - they assume it does. The question now is: does it actually make work easier?" said Rohini Sharma, Head of GTM (Service and Development) - APJ at monday.com.
Sharma said buyer scrutiny has intensified as AI becomes embedded in mainstream enterprise platforms. "The shift is toward practical value. Buyers want to see how AI helps people plan better, prioritise smarter, and spend less time on manual tasks. They're evaluating whether AI fits naturally into existing workflows, not whether it exists as a feature in a slide deck," she said.
"The maturity of the conversation has really changed. Australian buyers are more informed and less susceptible to hype. They're asking harder questions earlier; about governance, data security, and what successful adoption looks like in their environment. There's less tolerance for tools that promise transformation but require heavy change management or unclear ROI," added Sharma.
"The organisations winning AI evaluations today aren't the ones with the most features. They're the ones showing tangible, day-one impact with clear paths to scale, and doing it in a way that builds trust, not dependency on vendor promises," she said.
Scaling adoption
Beyond pilots and proofs of concept, Sharma argues that successful AI adoption is visible in routine behaviour rather than innovation showcases.
"Successful AI adoption is evident in small, everyday behaviours rather than in pilot programs or innovation showcases. When AI is genuinely working at scale, people stop talking about it as a separate initiative and start relying on it as part of how they plan work, prioritise tasks, and make decisions throughout the day," said Sharma.
"One of the biggest signals is ownership. AI pilots cannot remain the domain of innovation or IT teams if it is going to scale. Real adoption occurs when responsibility and capability extend across functions. When non-technical teams become comfortable using AI without guidance, and when responsibility for outcomes sits with the business, not just the platform or the tech team. That breadth of usage is what distinguishes experimentation from operational reality," she said.
Consistency is another marker. "If AI is only used by a handful of power users, it hasn't really landed. When it's embedded across functions and improving the same fundamentals everywhere (faster delivery, better quality, and less manual effort), that's when organisations move beyond experimentation," added Sharma.
"At that point, success isn't measured by novelty. It's measured by whether teams trust the outputs, use them repeatedly, and feel more confident in how they work as a result. That's what scaling actually looks like in practice," she said.
Proving ROI
With boards and executive teams demanding clearer returns, Sharma said early indicators of success often sit outside formal financial metrics.
"ROI matters, but the most reliable early signals don't appear on a finance dashboard; they appear in how people work," said Sharma.
"Before AI delivers top-line impact, smart leaders look for changes in how time is being spent: less manual coordination, fewer handoffs, less effort stitching work together across disconnected tools. Are teams still using AI three months in, or did adoption drop off after launch? When it sticks, it's because people feel more in control, clearer about priorities, make fewer avoidable mistakes, and spend more time on work that actually matters," she said.
"The investments that scale are the ones that reduce rework, improve consistency, and create clearer ownership across teams. Those improvements are measurable, even before they translate to revenue. They're also what separates a pilot that looks good in a deck from a platform that actually delivers," added Sharma.
"The logic is simple: when employees get tools that genuinely make their jobs easier, organisational impact follows. Better delivery outcomes. Stronger operational credibility. More sustainable performance over time. The financial returns come, but they come because the work improved first," she said.
Workflow redesign
According to Sharma, organisations achieving day-to-day AI impact are rethinking workflows before layering in new technology.
"Organisations that make AI useful day to day don't start with the technology; they start with the work. Instead of asking 'where can we add AI?', they look at where work consistently slows down, gets duplicated, or relies too heavily on manual coordination, and redesign those moments first," said Sharma.
"In practice, this means moving AI out of innovation sandboxes and into the systems people already use. AI is used to surface insights at the point of decision, automate routine coordination, and reduce the need for constant manual follow-ups. That's what turns AI into a day-to-day capability rather than an innovation tool," she said.
"This is where many pilots stall. Innovation teams can prove capability, but scaled impact only happens when AI supports cross-functional workflows, not just individual productivity. That often requires simplifying processes, reducing tool sprawl, and agreeing on shared ways of working before AI is layered in," added Sharma.
"When organisations get that right, AI doesn't feel like a separate initiative. It becomes part of how work moves through the business every day, removing friction rather than demanding attention," she said.
Data discipline
As AI initiatives expand, Sharma said fragmented technology stacks and poor data governance are emerging as critical barriers.
"AI has made complexity much harder to hide. As soon as organisations try to apply AI across real workflows, the cracks in fragmented tech stacks become obvious: disconnected tools, inconsistent data, and duplicated effort all limit what AI can actually do," said Sharma.
"Instead of layering AI on top of already complex environments, companies are prioritising platforms with native AI capabilities, where intelligence is built in rather than bolted on. This reduces integration overhead and ensures AI can efficiently operate across workflows rather than in isolation. Removing duplicate systems and simplifying processes creates a more linear, systematic environment where AI can operate at scale," she said.
"Data readiness is absolutely foundational to successful AI adoption, and it's where most AI initiatives stall. Even the most advanced AI systems cannot deliver value if the underlying data is inconsistent, poorly structured, or unmanaged," added Sharma.
"What's often missed is that data readiness is as much an organisational challenge as a technical one. It requires clear ownership, governance, and behavioural change across the organisation. Without alignment on how data is created, used, and maintained, AI initiatives struggle to move beyond limited use cases," she said.
"When organisations treat data as a shared operational asset, not just an IT responsibility, and invest in both technical preparation and behavioural adoption, AI can be scaled effectively and deliver measurable impact across workflows and teams," said Sharma.