Healthcare and pharmaceutical companies receive a constant stream of medical and clinical reports, authored by physicians, that require analysis and structured data extraction. Teams of medical professionals and data scientists must work together to extract and validate the relevant information, and this collaboration often introduces delays and process bottlenecks. With generative AI, and more specifically large language models (LLMs), the process becomes more efficient: simple natural language queries speed up analysis.
Feature Highlights
- Better Collaboration: Empower more people on your team to take part in analysis, bringing data scientists and healthcare teams closer together.
- Reduce Risk: More quickly identify patterns in patient data to proactively respond to challenges and adverse effects at scale.
- Secure: Using private models means you can gain the benefits of LLMs while keeping patient data secure.
- Increase Efficiency: Reduce time spent on manual analysis tasks by using natural language queries.
How It Works: Architecture
The analysis process takes place in two steps, each using a different LLM.
The first LLM analyzes the report sent to it and automatically extracts the relevant information in a structured way. Because this model handles sensitive data, it must be private. Extracted data should only include fields deemed relevant and appropriate to the use case, and personally identifiable information should not be extracted from the text. Trained health professionals can then use a user-friendly interface within an application to validate the accuracy of the extracted data and make any necessary adjustments.
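To make this first step concrete, here is a minimal sketch of allow-list extraction. The field names, prompt wording, and the `call_private_llm` wrapper are illustrative assumptions, not a prescribed format; the wrapper stands in for whichever privately hosted model is deployed.

```python
import json

# Fields deemed relevant to the use case; anything outside this
# allow-list (including PII such as names or dates of birth) is
# never requested from the model.
ALLOWED_FIELDS = ["diagnosis", "medication", "dosage", "adverse_events"]

EXTRACTION_PROMPT = """Extract the following fields from the medical report
below and return them as a JSON object with exactly these keys:
{fields}
Do NOT include any personally identifiable information (names, dates of
birth, addresses, identifiers). Use null for fields that are not present.

Report:
{report}
"""

def extract_structured_data(report_text, call_private_llm):
    """Ask the private LLM for an allow-listed set of fields only.

    `call_private_llm` is a hypothetical callable that takes a prompt
    string and returns the model's text completion.
    """
    prompt = EXTRACTION_PROMPT.format(
        fields=", ".join(ALLOWED_FIELDS), report=report_text
    )
    raw = call_private_llm(prompt)
    record = json.loads(raw)
    # Defense in depth: drop any keys the model added beyond the allow-list.
    return {k: record.get(k) for k in ALLOWED_FIELDS}
```

The returned dictionary is what the human reviewers would see in the validation interface, one record per report.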
Following validation, a second LLM call provides instant answers to any query, in the user's own language, within the application interface or through a chatbot. Since only the data schema and possibly a small data sample are sent, a public API can be used at this stage. Upon receiving a query, the model generates a set of Dataiku instructions that are executed locally to build the required dashboard, giving the user a tailored response to their question.
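The key privacy property of this second step is that only the schema crosses the wire. A rough sketch of the pattern, with a hypothetical `call_public_llm` wrapper and a simplified instruction format (the actual Dataiku instruction set is not shown here), might look like this:

```python
def answer_query_from_schema(user_question, schema, call_public_llm):
    """Only the schema is sent to the public LLM, never data values.

    `schema` maps column names to types, e.g.
    {"diagnosis": "string", "report_date": "date"}.
    `call_public_llm` is a hypothetical wrapper around a public LLM API.
    """
    schema_desc = "\n".join(f"- {col}: {dtype}" for col, dtype in schema.items())
    prompt = (
        "You can only see the dataset schema below, never its contents.\n"
        f"Schema:\n{schema_desc}\n\n"
        "Produce step-by-step analysis instructions (group-by columns, "
        "aggregations, chart type) that answer this question:\n"
        f"{user_question}"
    )
    instructions = call_public_llm(prompt)
    # The instructions are then executed locally against the real data,
    # which never leaves the platform.
    return instructions
```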
This approach allows the model to stay relevant across an unlimited scope, no matter the size or complexity of the underlying data, while preserving a high level of data privacy: no actual data values are transmitted to the model during the process. A containerized version of the LLM could offer even stricter control over data and inputs.
Responsibility Considerations
For the first model, which uses sensitive data, data privacy and security should be enforced as noted in the architecture recommendations. Human reviewers should be able to provide feedback on model performance to further improve the underlying extraction algorithm.
Additionally, data scientists should review extractions and their overall correctness for consistency and for fairness across different subgroups, ensuring no bias is present in the way the extraction prioritizes certain information.
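One way to operationalize such a review, sketched below with hypothetical record fields (`extracted`, `validated`, `age_band`), is to compare model extractions against the human-validated values per subgroup:

```python
from collections import defaultdict

def extraction_accuracy_by_subgroup(records, subgroup_key="age_band"):
    """Compare model extractions against human-validated values per subgroup.

    `records` is a list of dicts with (hypothetical) keys "extracted",
    "validated", and a subgroup attribute such as "age_band". A large
    accuracy gap between subgroups is a signal to investigate bias in
    what the extraction prioritizes.
    """
    totals = defaultdict(int)
    correct = defaultdict(int)
    for r in records:
        group = r[subgroup_key]
        totals[group] += 1
        if r["extracted"] == r["validated"]:
            correct[group] += 1
    return {g: correct[g] / totals[g] for g in totals}

# Example: a result like {"18-40": 0.96, "65+": 0.81} would flag the
# older cohort's reports for closer review.
```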
For insight generation with the second LLM, insights should be marked as AI generated so that end users know they are interacting with an AI system. Use of the chatbot should be limited to analytics questions that surface information on trends, not predictive or individual patient insights.
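A guardrail enforcing both points could be as simple as the following sketch. The blocked patterns and disclosure prefix are illustrative assumptions; a production system would use a proper intent classifier rather than keyword matching, but the contract is the same: only aggregate, trend-level questions reach the insight LLM.

```python
# Illustrative patterns for queries the chatbot should refuse.
BLOCKED_PATTERNS = ("predict", "prognosis", "will patient", "this patient",
                    "patient id")

AI_DISCLOSURE = "[AI-generated insight] "

def route_chat_query(question, answer_with_llm):
    """`answer_with_llm` is a hypothetical callable wrapping the second LLM."""
    q = question.lower()
    if any(p in q for p in BLOCKED_PATTERNS):
        return ("This assistant only answers aggregate analytics questions "
                "about trends, not predictive or individual patient questions.")
    # Prefix every answer so end users always know they are
    # interacting with an AI system.
    return AI_DISCLOSURE + answer_with_llm(question)
```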
Beyond being transparent about the potential limitations of the LLM/chatbot's responses, a panel titled “Data Sources Used for the Insights” shows users which columns from which datasets were used to generate the insights. The model's limitations should be documented, and end users should be encouraged to use their best judgment when working with its outputs.
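As a sketch of how such a panel could be populated, assuming a hypothetical `lineage` mapping from dataset names to the columns actually referenced while generating an insight:

```python
def render_data_sources_panel(lineage):
    """Render the text of the "Data Sources Used for the Insights" panel.

    `lineage` is a hypothetical mapping such as
    {"patient_reports": ["diagnosis", "report_date"]}.
    """
    lines = ["Data Sources Used for the Insights"]
    for dataset, columns in sorted(lineage.items()):
        lines.append(f"- {dataset}: {', '.join(columns)}")
    return "\n".join(lines)
```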