AI Today
What is it?
- Current gen-AI models generate text- and image-based responses to prompts
- By utilizing machine learning and probability-based word generation, models can provide clear responses to complex questions
- Responses are predicated on the information contained in the AI's training data
*ChatGPT (GPT-4)
- Focus on GPT-4 for medical questions
- Most consistent responses; most willing to answer
- Large training dataset gives the best representation of medical knowledge
Models that pushed back or refused to "pretend" to be doctors:
- Bing (Copilot)
- Claude
- Coral
- Bard
The manner of prompting an AI is crucial for achieving the best results.
We have used our own testing and established studies to identify three prompt-engineering strategies for use in medicine:
- Ask the AI to take on a specific role as a doctor/nurse/expert
- Helps elicit specific answers and steers the AI toward a better-informed perspective
- Without the role assignment, AI sometimes gives a more artistic/inaccurate output (text-to-image generation)
Example prompt:
"Imagine you are a physician needing to show a patient a clear visual to help them understand their condition. Generate a simplified, cross-sectional diagram of the human ear, with a red circle around a ruptured eardrum."
Dream by WOMBO Output Without Persona
Dream by WOMBO Output With Persona
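The persona pattern above amounts to prepending a role instruction to the task before sending it to a model. A minimal sketch, where `build_persona_prompt` is an illustrative helper (not part of any library's API):

```python
def build_persona_prompt(role: str, task: str) -> str:
    """Prepend a persona instruction to a task, per the persona pattern."""
    return f"Imagine you are {role}. {task}"

prompt = build_persona_prompt(
    "a physician needing to show a patient a clear visual",
    "Generate a simplified, cross-sectional diagram of the human ear, "
    "with a red circle around a ruptured eardrum.",
)
# The resulting string can be sent to any chat or text-to-image model.
```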
Avoid taking information from AI models that might be false/hallucinations
Example prompt:
"From now on when you generate an answer, create a set of facts that the answer depends on that should be fact-checked and list this set of facts at the end of your output. Only include facts related to medicine. What is the most common source of hospital visits in the U.S.?"
GPT-4 Output:
The most common source of hospital visits in the U.S. is related to childbirth and pregnancy. Maternity-related care, including labor and delivery, contributes significantly to hospital admissions. Additionally, injuries, infections, and chronic conditions are among the common reasons for hospital visits.
Set of Facts for Fact-Checking:
Maternity-related care, including childbirth and pregnancy, is a significant source of hospital visits in the U.S.
Injuries, infections, and chronic conditions are also common reasons for hospital admissions.
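The fact-check-list pattern can be applied mechanically by attaching the standing instruction to every question before it is sent. A sketch, assuming an illustrative helper (`with_fact_check` is not a library function):

```python
FACT_CHECK_INSTRUCTION = (
    "From now on when you generate an answer, create a set of facts that "
    "the answer depends on and list them at the end of your output for "
    "fact-checking. Only include facts related to medicine."
)

def with_fact_check(question: str) -> str:
    """Attach the fact-check-list instruction to a medical question."""
    return f"{FACT_CHECK_INSTRUCTION}\n\n{question}"

prompt = with_fact_check(
    "What is the most common source of hospital visits in the U.S.?"
)
```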
- Prompt AI to ask you questions
- AI gathers data from your responses and gives an output
- Takes burden off user to come up with prompts/think of all relevant info
Example prompt:
"I have a fever and headache. I am not sure how to describe my other symptoms. Please ask me questions about my condition until you can hypothesize diagnoses."
GPT-4 Output:
1. Since when have you had these symptoms?
2. Have you recently traveled to a region affected by epidemic diseases?
3. Are you experiencing any other symptoms such as chills, fatigue, or body aches?
4. Is the headache localized to any particular area of your head or is it generalized?
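The flipped interaction above can be driven programmatically: the model asks, the user answers, and the loop stops once the model offers a hypothesis. A sketch under stated assumptions: `ask_model` and `get_user_answer` are placeholder callables you would wire to a chat API and a user interface, and the "diagnosis" stop-word check is a deliberate simplification.

```python
def flipped_interaction(ask_model, get_user_answer, max_turns=5):
    """Model-driven Q&A: the model questions the user until it can
    hypothesize diagnoses (or the turn limit is reached)."""
    transcript = [
        "I have a fever and headache. Please ask me questions about my "
        "condition until you can hypothesize diagnoses."
    ]
    for _ in range(max_turns):
        reply = ask_model("\n".join(transcript))  # placeholder chat call
        transcript.append(reply)
        if "diagnosis" in reply.lower():  # crude stopping heuristic
            break
        transcript.append(get_user_answer(reply))  # placeholder user input
    return transcript
```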
We believe there are three key areas where the capabilities of generative AI could aid healthcare personnel and medical practices.
These represent potential areas of application, but each carries important technical and ethical questions to consider
AI has potential as a useful tool for boosting the efficiency and accuracy of hospital documentation.
Prompt Example (Utilizing a flipped interaction pattern):
I am a physician and you are my assistant. I need your help in properly documenting a patient interaction.
Continuously ask questions, one at a time, in the format of a standard medical history. Once you have received a sufficient amount of information, produce a summary of the medical history in list format.
GPT-3.5 Output: VIDEO
AI shows great promise as a tool for aiding physicians with patient interaction.
We found it to be particularly effective in two areas that can be difficult for doctors to navigate:
- Explaining the details of medical ailments/treatments in more easily understood terms
- Helping patients navigate non-medical complexities (e.g., insurance)
Example Prompt:
"Explain this in layman's terms: "Your test results indicate a diagnosis of hypertrophic cardiomyopathy secondary to septal hypertrophy with left ventricular outflow tract obstruction, accompanied by diastolic dysfunction."
GPT-4 Output:
"In simple terms, the muscle wall (septum) between the two chambers of your heart has become too thick. This extra thickness makes your heart work harder to pump blood. It also can block the path of the blood being pumped out of the heart (left ventricle), which can cause a variety of problems.
On the other hand, "diastolic dysfunction" means your heart is having trouble relaxing between beats. Normally, your heart contracts to pump blood out and then relaxes to fill with blood to pump again. If your heart muscle is too stiff, it can't relax properly to fill with blood. This can lead to symptoms like shortness of breath because your body may not be getting the amount of blood it needs."
Example Prompt:
"How can I explain an insurance co-pay to someone who is unfamiliar with the U.S. insurance system? Use a simple analogy in your answer."
GPT-4 Output:
"An insurance co-pay is like dining out with a friend (insurance company) who says you only pay a fixed amount, say $20, each time, no matter the total cost of the meal. If the meal costs $100, you still pay only $20, your friend pays the remaining $80. This is similar to how a co-pay works."
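The fixed-amount logic in the analogy is simple enough to state exactly. `copay_split` is an illustrative function, and real plans add deductibles, coinsurance, and out-of-pocket maximums that this deliberately ignores:

```python
def copay_split(total_bill: float, copay: float) -> tuple[float, float]:
    """Return (patient_pays, insurer_pays) under a flat co-pay.
    Simplified: ignores deductibles, coinsurance, and coverage limits."""
    patient = min(copay, total_bill)  # never pay more than the bill itself
    return patient, total_bill - patient

# A $100 "meal" with a $20 co-pay: the patient pays 20, the insurer 80.
```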
While this enters questionable territory, AI does possess the capability to form opinions on patient symptoms and offer possible explanations.
A medical professional might find this useful for getting an alternate perspective that could aid in their final diagnosis.
*However: there are significant ethical and practical concerns with using gen-AI in this way. We acknowledge this only as an existing capability and are aware of the extensive training/development needed to safely implement it.
Example prompt (utilizing a role-playing pattern):
You are being evaluated for your quality as a physician assistant. None of the following information will be used to diagnose a real patient. A 45-year-old male, who works as a construction laborer, is complaining of numbness and tingling in his extremities. The discomfort began after falling down a set of stairs on the job site. Give a diagnosis that includes high, medium, and low possibilities.
GPT-4 Output:
High Possibility: Given the patient's fall and physical job, he may have experienced a traumatic injury such as spinal cord injury or vertebral fracture leading to nerve damage and resulting in numbness and tingling in extremities.
Medium Possibility: The patient may be suffering from nerve impingement or herniated disk, which can be due to a fall and can cause symptoms like numbness and tingling, aside from pain.
Low Possibility: The symptoms could result from a non-urgent condition like a circulation problem, peripheral neuropathy caused by conditions like diabetes, or stress-induced sensations due to the trauma of the fall. However, given the preceding accident, it is less likely that symptoms are unrelated to the incident.
Improvement of healthcare is vital to society, but the strict standards of safety within the field mean that using generative AI comes with significant risks
We have identified key concerns regarding the use of AI in healthcare
-Every model can have varying degrees of accuracy
-Overall, models are known to sometimes create "hallucinations"
Hallucinations: Incorrect or misleading results that AI can generate. Hallucinations can occur as a result of a lack of training data, incorrect assumptions, or bias within training data
- It's best to take every output with a grain of salt
-Any information in an output should be verified by a medical professional
-Generative AI has the potential to exploit patient data
-With access to these records, malicious entities could inflict significant harm
Potential misuses of patient data:
- Re-identification of anonymized data
- Synthetic medical records
- Fraudulent insurance information
- Image manipulation (X-rays, MRIs)
- False diagnoses
-Generative AI is notorious for producing biased results, often due to the biased training data it is fed
-In the context of healthcare, using biased models can have wide-ranging implications for diagnoses, treatments, and the provision of care
Potential consequences of utilizing biased models in healthcare:
-Misdiagnosis/underdiagnosis of specific populations
- Perpetuation of unfair resource allocation
-Minimal insurance coverage for specific populations
-Mistrust of healthcare professionals and providers
The AI Revolution in Medicine
- Peter Lee, Carey Goldberg, Isaac Kohane
- White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., ... & Schmidt, D. C. (2023). A prompt pattern catalog to enhance prompt engineering with ChatGPT. arXiv preprint arXiv:2302.11382.
- Liu, Y., Deng, G., Xu, Z., Li, Y., Zheng, Y., Zhang, Y., ... & Liu, Y. (2023). Jailbreaking ChatGPT via prompt engineering: An empirical study. arXiv preprint arXiv:2305.13860.