
PODCAST OVERVIEW

IBM’s AI Odyssey: A structured approach to AI ethics and implementation

The latest Generation AI podcast explores what’s needed when scaling AI systems in highly regulated industries

Paige Peterson

October 12, 2023 · 4 minutes

[Podcast cover art: Generation AI episode on the need for an AI ethics framework in regulated industries]

Artificial intelligence (AI) and automation are being adopted across a multitude of industries — but how different AI systems are selected, designed, and implemented varies wildly depending on many factors. Where one company automates, another may face ethical concerns and risks that far outweigh the possible benefits.

For organizations in heavily regulated industries like finance or health, leveraging AI technology looks very different than it does in marketing or retail. Shobhit Varshney, VP and Senior Partner at IBM Consulting, joins the Generation AI podcast to discuss deploying large-scale AI and automation systems within these highly regulated industries, what to consider when building out the technology, and how to manage systems within the constraints of AI ethics.


Implementing AI is more than just pushing a button

The average consumer can log into an artificial intelligence program like ChatGPT, type in a prompt, and be on their way. But those in regulated industries like health or finance have a different set of considerations to work through before bringing AI into the business. Shobhit finds that the most consistent use cases for generative AI tools, in particular, revolve around automating the more tedious parts of very manual (and very long) processes.

“Think about a mortgage process or someone filing for an approval claim,” he says. “That entire process is very human, but it’s also very manual, and it breaks down into many tasks. For every task, we evaluate whether automation or AI would actually deliver a high ROI with high accuracy.”

But again, it’s not that simple. There are questions business leaders must ask themselves to ensure they’re using AI solutions where they genuinely add value and implementing them ethically and responsibly:

  • Does generative AI bring a capability to the table that human beings can’t?
  • Are the accuracy and bias of the AI manageable?
  • Are we getting enough ROI to make the implementation worth it?

The architecture of AI — and everything else

For some organizations, the answers to those questions and the overall ethics of artificial intelligence in their industry lie within their internal architectures and processes. Across highly regulated industries, there are multiple ways to address security, IP protection, and compliance with regulations like GDPR.

“Sometimes we bring in our watsonx platform completely on premises to support companies worried about cyber risk, and sometimes we bring in a cloud-based model,” Shobhit says. “But there’s always a way to follow the regulatory compliance needed in each industry.”

The element of compliance that Shobhit is most concerned with, outside of data protection, is AI ethics. There’s an “insane amount of effort” that goes into examining every piece of generated content and ensuring it adheres to ethical, responsible AI principles, especially when it comes to tackling nuance.

“Content grounding is a must-have here,” he says. “You want to make sure that the model you’re deploying understands the nuances of how they look at knowledge inside the organization. Any answer has to have enough context within it.”

To get out in front of this, Shobhit and his team are constantly working to build techniques that reliably assess the accuracy of large language models while also sitting at the intersection of regulatory compliance and ethical AI use.

“There’s a lot of risk if you don’t take this seriously and ensure you’re thinking about ethics right up there with security,” Shobhit says. “We’re at a point where people are excited about AI, but they also want to see that it’s auditable.”


Despite ongoing concerns, the use of AI is paying dividends

The risk is worth the reward for many business leaders, according to Shobhit. A recent study he and his team conducted across 3,000 C-suite executives globally found that the use of artificial intelligence is no longer a point of hesitation — it’s a must-have, with the caveat that it’s done sensibly and mindfully.

“Seventy-five percent of them were very clear that they see increasing investments in generative AI, and they’ve seen a boost in ROI,” he says. “We’re also seeing six times the improvement in ROI, but we’re also seeing over 50 percent saying they’re concerned about the lack of trust.”

What does this data say? To Shobhit, it says the community of people passionate about leveraging AI to make a difference in the way we work today must focus on creating enterprise layers around transparency and responsible AI. While there’s sometimes a hefty up-front time investment to train AI models to manage data ethically and securely, the risk is often well worth the reward.

“If we do a good job at creating the right layers around AI, that sets us up to accelerate really quickly,” says Shobhit. “The value unlocked by AI is significant if you get this right.”

Watch the episode to hear our full conversation with Shobhit about the usefulness and ethics of AI at a high level.

Is the ethical, responsible use of AI tools the future of highly regulated industries?