AI-powered systems are enabling organizations to “stop problems before they’ve started,” says Julie Develin, senior partner for human insights at UKG in Lowell, Mass.
By identifying patterns and detecting anomalies in financial data, artificial intelligence is creating new opportunities for risk management while allowing leaders to take a proactive approach to preventing potential issues.
“I’ve seen AI work really well through UKG’s shift swapping capabilities, where our software will recommend what shift an employee should take or shouldn’t take. It will also provide information as to whether or not that person’s working too much,” Develin says.
That type of proactive approach can yield numerous benefits.
“What that’s done for organizations is they’ve been able to stop problems before they’ve started — from an overtime standpoint, from an employee burnout standpoint, and that kind of thing.”
Similarly, AI can analyze employee sentiment to identify potential risk areas within an organization.
“Sentiment analysis is another thing where I’ve seen organizations utilize our software to understand what their employees are thinking day to day on a certain topic,” Develin notes. “Let’s say I log into our system and I ask my employees, ‘How are you feeling today?’ Just a simple question. Then that information will congregate, bring it together, and let’s say the word ‘stress’ is shown 25 times. That then tells me, as a leader, that I have a stress problem on my hands.”
This early detection capability enables targeted interventions.
“I’m [then] able to narrow that down and say, ‘Well, 20 of those people work in this plant, and they work for this manager.’ So why? What can I do now? It’s about being present futurists. It’s about looking at the problems today to understand how we can mitigate them for tomorrow — to prevent them.”
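The aggregation Develin describes — pooling check-in responses, counting a keyword like “stress,” then drilling down by plant and manager — can be sketched in a few lines. This is an illustrative outline only, not UKG’s implementation; all field names and data are hypothetical.

```python
from collections import Counter

# Hypothetical check-in responses; field names are illustrative.
responses = [
    {"employee": "a1", "plant": "Plant 1", "manager": "Kim", "text": "feeling stress again"},
    {"employee": "b2", "plant": "Plant 1", "manager": "Kim", "text": "stress over deadlines"},
    {"employee": "c3", "plant": "Plant 2", "manager": "Lee", "text": "doing fine today"},
]

def keyword_counts(records, keyword):
    """Count keyword occurrences overall and per (plant, manager) group."""
    overall = 0
    by_group = Counter()
    for r in records:
        hits = r["text"].lower().split().count(keyword)
        overall += hits
        if hits:
            by_group[(r["plant"], r["manager"])] += hits
    return overall, by_group

total, groups = keyword_counts(responses, "stress")
# total tells a leader the scale of the problem; groups shows where it sits.
```

A real deployment would handle stemming, synonyms, and anonymity thresholds, but the leader-facing logic — total count first, then group-level drill-down — is the same.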
Pattern recognition in finance
Lisa Haydon, founder and CEO of Halifax-based Pivotal Growth, has witnessed how AI transforms pattern recognition capabilities across organizations, including financial operations.
“The way that AI allows us to get at patterns and correlations is in the people performance side,” Haydon says. “In the past, when you looked at doing either consulting or using assessments, you were constrained by the consultants’ capacity to handle Excel-based spreadsheets, especially if they were qualitative agencies.”
But sophisticated AI tools enable a more comprehensive analysis.
“We are now able to use both quantitative and qualitative data in the way we create insights for an organization, and so that allows some patterning and some much more robustness,” Haydon says.
This capability extends beyond HR applications to financial operations, where AI can identify trends and correlations that might signal risks or opportunities, she says.
“As a consultant, what we’re able to do is really get focused on that gap assessment on a future focus basis with much more clarity. We are able to get at patterns far clearer, and that means our clients get a much more prioritized and data-backed point of view.”
Haydon adds that AI has transformed decision-making, moving it from opinion-based to more evidence-based.
“It was completely opinion based. And there’s still many organizations that make their leadership development investments based on opinion,” she says. “We’re actually taking this — what we have from data, insights, patterns, correlations — and then customizing it for an organization and being able to give them very specific insight.”
Detecting and addressing financial irregularities
When AI systems flag potential irregularities in financial data, a structured approach to human review becomes essential.
“Flagged items should be directed to human expert reviewers with sufficient contextual data from the AI,” says Cristina Goldt, general manager of workforce and pay at Workday. “These reviewers can then analyze the information to distinguish between false positives and genuine problems, identifying root causes and determining necessary actions.”
The learning should flow both ways. “Importantly, what the human team learns should help the AI get better at spotting things correctly in the future,” she adds. “Keeping good records of all this helps keep things on track and helps us make the whole system even smarter over time.”
Develin agrees and recommends regular audits be part of this process.
“Workflows and audits, right? So ensuring that the right eyes are on the information, and then periodically auditing what your AI tools are putting out there to ensure fairness and balance and that everything’s working properly.”
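The workflow Goldt and Develin outline — route each AI flag to a reviewer with its context, record the verdict, and track false-positive rates so the flagging logic can be tuned — can be sketched as follows. This is a minimal illustration; the class and function names are invented for this example and imply no specific vendor’s system.

```python
from dataclasses import dataclass

@dataclass
class Flag:
    """One AI-flagged item, carrying the context a reviewer needs."""
    txn_id: str
    reason: str          # contextual explanation attached by the AI
    verdict: str = ""    # filled in by review: "false_positive" or "genuine"

def review(flags, judge):
    """Apply a human judgment to each flag and keep an audit trail."""
    audit = []
    for f in flags:
        f.verdict = judge(f)
        audit.append((f.txn_id, f.reason, f.verdict))
    return audit

def false_positive_rate(audit):
    """Feedback signal: a rising rate means the flagging rules need retuning."""
    fp = sum(1 for _, _, v in audit if v == "false_positive")
    return fp / len(audit) if audit else 0.0

# Toy run: two flags, a reviewer clears one and confirms the other.
flags = [Flag("t1", "unusual overtime spike"), Flag("t2", "duplicate payment")]
audit = review(flags, lambda f: "false_positive" if f.txn_id == "t1" else "genuine")
rate = false_positive_rate(audit)
```

The audit list is the “keeping good records” piece: it is both the periodic-audit evidence Develin recommends and the labeled data that lets the flagging model improve over time.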
The dangers of oversimplified AI usage
While AI offers powerful capabilities for financial risk management, there’s also a risk of organizations placing too much trust in these systems without understanding their limitations.
“Because it’s so easy to use and so easy to get an answer, employers can make a significant mistake or just rely on it when it’s not as sophisticated as it might be if it was with a proper prompt or a heavily customized tool,” Haydon says.
She adds AI tools typically provide about 80 per cent accuracy at best. “You have to understand how ChatGPT thinks — it is an algorithm, so it’s very linear and logical. If you don’t give it enough context and prompt engineering, you’ll get back exactly what it interprets.”
This underscores the need for human expertise to verify AI outputs. “That contextualization, that ability to spot the error — if you’re not intent on that, it is real. And so there is a risk,” Haydon says.
Building governance for AI in financial operations
To address these challenges, experts recommend establishing comprehensive AI governance frameworks — particularly for applications in financial risk management.
“When I ask leaders whether or not they have an ethical AI framework, most of them say, ‘No, we don’t have that yet,’” Develin says. “Just like any other policy that’s put into place for employee governance, we need to make sure that an AI policy, an ethical AI policy, is part of that.”
These frameworks should address not only how data is handled but also how AI outputs are used in decision-making. “It’s not only that the ethical AI policy is not just about the data that we put in, but again, it’s about how the organization is using the data that it receives as well, and what kind of transparency are they providing to employees and other stakeholders,” Develin says.
Industry-specific considerations are also crucial, she adds. “I would caution organizations when they’re looking at their ethical AI framework to really know their industry as well, know what sector they’re in, and to ensure that the information being uploaded [doesn’t offend the] laws that are pertinent to [their] industry.”