LOS ANGELES WIRE

May 20, 2024

Munjal Shah’s Hippocratic AI Partners with Nvidia to Build More Empathetic Healthcare LLMs

Photo Courtesy: Munjal Shah

By: Hippocratic AI

Hippocratic AI CEO Munjal Shah said the partnership with Nvidia would produce conversational AI healthcare agents with faster response times.

When you talk to a doctor or nurse, you want to feel heard. Empathy is an important quality for healthcare professionals, but it’s not always easy for someone who sees dozens of patients daily to summon compassionate, deep listening on command.

Munjal Shah, founder of healthcare artificial intelligence startup Hippocratic AI, cites this difficulty as a major motivation behind the company’s goal of building safe, nondiagnostic healthcare large language models that can converse with patients. The company has partnered with accelerated computing pioneer Nvidia to build super-low-latency inference into its LLMs, cutting response times so that its AI agents feel more conversational and empathetic.

“With generative AI, patient interactions can be seamless, personalized, and conversational — but to have the desired impact, the speed of inference has to be incredibly fast. With the latest advances in LLM inference, speech synthesis, and voice recognition software, Nvidia’s technology stack is critical to achieving this speed,” said Shah. “We’re working with Nvidia to continue refining our technology to amplify the impact of our work of mitigating staffing shortages while enhancing access, equity, and patient outcomes.”

Hippocratic AI’s LLMs are the foundation of AI healthcare agents that can converse with patients over the phone for services such as chronic care nursing, patient navigation, and nutritional counseling. While popular LLM-powered chatbots like ChatGPT respond relatively quickly to user prompts, they aren’t built to mirror the pace of human conversation; even taking 10 seconds to generate a response would feel like an eternity in a human-to-human exchange. Shah hopes that by using Nvidia’s technology, Hippocratic AI can reduce the time its AI agents take to respond, making patients feel they are talking to a caring, humanlike responder rather than to a computer processing the conversation.

User testing has consistently demonstrated the connection between low-latency responses and perceived empathy. Given that LLMs operate on inference engines, the company has coined the term “empathy inference” to describe its low-latency LLM tech.
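
The article doesn’t detail how that latency is measured or reduced, but “empathy inference” essentially refers to the time between the end of the patient’s utterance and the first audible word of the agent’s reply. The Python sketch below is purely illustrative: the stage timings, the generate_reply stub, and the streaming-versus-blocking comparison are assumptions for illustration, not Hippocratic AI’s implementation.

```python
import time
from typing import Iterator

# All timings below are illustrative placeholders, not Hippocratic AI's figures.
ASR_DELAY = 0.3           # seconds to transcribe the patient's utterance
TOKEN_DELAY = 0.02        # seconds per generated token
TTS_STARTUP_DELAY = 0.2   # seconds before synthesized audio starts playing
REPLY_TOKENS = 80         # length of the agent's full reply
FIRST_SENTENCE_TOKENS = 15  # tokens needed before speech could plausibly begin


def generate_reply() -> Iterator[str]:
    """Simulated LLM that streams one token at a time."""
    for i in range(REPLY_TOKENS):
        time.sleep(TOKEN_DELAY)
        yield f"token{i}"


def perceived_latency(stream_to_tts: bool) -> float:
    """Time from the end of the patient's speech to the first audible word."""
    start = time.monotonic()
    time.sleep(ASR_DELAY)                      # speech recognition
    tokens_needed = FIRST_SENTENCE_TOKENS if stream_to_tts else REPLY_TOKENS
    for count, _ in enumerate(generate_reply(), start=1):
        if count == tokens_needed:             # enough text to begin speaking
            break
    time.sleep(TTS_STARTUP_DELAY)              # speech synthesis spin-up
    return time.monotonic() - start


if __name__ == "__main__":
    print(f"wait for full reply:   ~{perceived_latency(False):.2f} s to first audio")
    print(f"stream first sentence: ~{perceived_latency(True):.2f} s to first audio")
```

The point of the comparison is that beginning speech synthesis on the first complete sentence, rather than waiting for the full reply, is one common way such pipelines shave seconds off perceived response time.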

Testing Empathy and Responsiveness

The collaboration between Hippocratic AI and Nvidia has already produced notable results from large-scale user tests.

Internal research by Hippocratic AI reveals that every half-second improvement in response times can significantly enhance patients’ emotional engagement with AI healthcare agents. When inference times were reduced from over 3 seconds to 2.2 seconds, there were notable increases in patients feeling that the AI cared about them and their comfort level in confiding in the AI.

Responding to the question, “Did you feel this AI cared about you?” 84.3% of the 1,002 licensed nurses acting as patients answered affirmatively when the AI’s response time exceeded 3 seconds, compared to 88.2% when the AI responded within 2.2 seconds. When asked about their comfort level in confiding in the AI, 80.1% of participants said they felt comfortable when the AI’s end-to-end inference time was longer than 3 seconds. This figure rose significantly to 88.9% among those who experienced a response time of 2.2 seconds.

These results fit within a broader research program that has demonstrated the ability of AI chatbots to exhibit more empathy than human beings in specific contexts. A study published in 2023 in JAMA Internal Medicine found that chatbot responses were rated as 41% more empathetic than those from human physicians. The same study found that physicians were five times more likely than AI to deliver responses rated less than slightly empathetic, while chatbots were nine times more likely than physicians to give responses rated empathetic or very empathetic.

Munjal Shah emphasized that while demonstrating empathy is essential to building a healthcare LLM, these AI agents must be more than empathetic; they need to be safe. He pointed out that, in addition to demonstrating the importance of empathy inference, Hippocratic AI has conducted several tests showing its LLMs’ ability to outperform other LLMs and human healthcare workers on critical safety tasks.

The company released data showing that its LLM outperformed GPT-4, Meta’s Llama 2 70B, and human nurses on tasks such as identifying the impact of medications on lab tests, identifying condition-specific disallowed over-the-counter drugs, and detecting toxic OTC drug doses.

Technological Foundations 

The low-latency responsiveness of Hippocratic AI’s LLM relies on a combination of Nvidia’s accelerated computing and AI solutions. 

“Voice-based digital agents powered by generative AI can usher in an age of abundance in health care, but only if the technology responds to patients as a human would,” said Kimberly Powell, vice president of health care at Nvidia. “This type of engagement will require continued innovation and close collaboration with companies, such as Hippocratic AI, developing cutting-edge solutions.”

The collaboration involves Nvidia’s low-latency inference stack, leveraging H100 Tensor Core GPUs and NIM microservices to support the development and deployment of the AI agents. Hippocratic AI will use Nvidia Riva models for automatic speech recognition and text-to-speech, customizing them for healthcare-specific applications.
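
Neither company’s code is public, so the following is only a structural sketch of the turn-taking loop such a voice agent implies: Riva-style speech recognition feeding a healthcare LLM served from GPU microservices, with the text reply converted back to audio. The class names, methods, and canned responses below are hypothetical placeholders, not actual Nvidia or Hippocratic AI APIs.

```python
# Hypothetical stand-ins for the real components: Riva-style speech recognition,
# an LLM served on H100 GPUs via NIM microservices, and Riva-style text-to-speech.
# None of these classes reflect actual Nvidia or Hippocratic AI APIs.

class SpeechToText:
    def transcribe(self, audio: bytes) -> str:
        # Placeholder transcript standing in for real speech recognition output.
        return "I can't remember if I already took my evening medication."


class HealthcareLLM:
    def reply(self, transcript: str) -> str:
        # A real agent would apply safety guardrails and stay nondiagnostic.
        return "That's all right, it happens. Let's walk through your medication schedule together."


class TextToSpeech:
    def synthesize(self, text: str) -> bytes:
        # Placeholder "audio": real TTS would return synthesized speech samples.
        return text.encode("utf-8")


def handle_turn(audio_in: bytes, asr: SpeechToText, llm: HealthcareLLM, tts: TextToSpeech) -> bytes:
    """One conversational turn: patient speech in, agent speech out."""
    transcript = asr.transcribe(audio_in)
    response_text = llm.reply(transcript)
    return tts.synthesize(response_text)


if __name__ == "__main__":
    audio_out = handle_turn(b"\x00" * 16000, SpeechToText(), HealthcareLLM(), TextToSpeech())
    print(audio_out.decode("utf-8"))
```

In a production system each of these stages would stream partial results to the next rather than run to completion, which is where the low-latency inference stack described above does its work.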

At Nvidia’s recent GTC global AI developer conference, Shah demonstrated Hippocratic AI models using the Nvidia Avatar Cloud Engine suite of technologies.

“Nvidia’s technology stack is critical to achieving the conversational speed and fluidity necessary for patients to naturally build an emotional connection with Hippocratic’s generative AI healthcare agents,” said Shah. “With generative AI, we can address some of the most pressing needs of the healthcare industry. We can help mitigate widespread staffing shortages and increase access to high-quality care — all while improving patient outcomes.”

Published by: Nelly Chavez

This article features branded content from a third party. Opinions in this article do not reflect the opinions and beliefs of Los Angeles Wire.