How to Use ChatGPT to Improve Your Health and Nutrition Research
Summarized from peer-reviewed research indexed in PubMed. See citations below.
AI chatbots now answer millions of health questions daily, yet research shows 18-55% of citations are fabricated and drug interaction accuracy falls below 50% for supplements. Published research in Nutrients and JMIR demonstrates that systematic verification through PubMed, combined with structured prompting, reduces AI hallucination errors by over 60% compared with accepting answers at face value. Here’s what the published research shows about using AI safely for health research while avoiding dangerous misinformation.
Why Is AI Changing How We Research Health?
The way people research health and nutrition has fundamentally shifted. Instead of spending hours scrolling through contradictory blog posts, millions of people now ask ChatGPT questions like “Is magnesium glycinate better than citrate for sleep?” or “What does the research say about berberine for blood sugar?”
A February 2026 randomized trial from the University of Oxford involving nearly 1,300 participants found that people using AI chatbots to assess their health symptoms did not make better decisions than those who relied on traditional online searches or their own judgment. The study, published in Nature Medicine, concluded that “AI just isn’t ready to take on the role of the physician” and that models performing well on standardized medical tests “faltered when interacting with people” (University of Oxford, 2026).
Meanwhile, an August 2025 study from the Icahn School of Medicine at Mount Sinai found that AI chatbots hallucinated fabricated diseases, lab values, and clinical signs in up to 83% of simulated cases when no safety measures were in place (Mount Sinai, 2025).
| Feature | ChatGPT (GPT-4) | ChatGPT (GPT-3.5) | Traditional Research |
|---|---|---|---|
| Citation Accuracy | 82% real citations | 45% real citations | 100% (direct source) |
| Drug Interaction Accuracy | Below 50% for supplements | Below 50% for supplements | 95%+ (pharmacist verified) |
| Medical Exam Performance | 60.2% (USMLE) | 50-55% | N/A |
| Hallucination Rate | 18% fabricated references | 55% fabricated references | 0% |
| Speed | Instant answers | Instant answers | 20-60 minutes research |
| Personalization | Generic responses | Generic responses | Fully individualized |
| Cost | $20/month (Plus) | Free | Free (time investment) |
So should you stop using ChatGPT for health research entirely? No. But you need to use it correctly. Think of ChatGPT as a research assistant, not a doctor, not a dietitian, and definitely not an oracle. When used with the right prompting strategies, verification habits, and a healthy dose of skepticism, AI can genuinely accelerate your ability to understand supplements and nutrition science, with performance approaching human level on challenging evaluations like the USMLE (60.2%) and PubMedQA (78.2%) (PubMed 38280318). Studies specifically evaluating ChatGPT’s performance on medical licensing exams showed promising results (PubMed 36700906), though accuracy varied significantly by specialty and question complexity (PubMed 37548971).

What Is ChatGPT Bad At?
Here is where things get dangerous if you are not careful:
- It fabricates citations. A study examining GPT-4o found that citation fabrication remained common even in the latest model versions. Research shows that 18% to 55% of AI-generated citations are partially or fully fabricated, depending on the model version (Hallucination Rates and Reference Accuracy, JMIR, 2024). GPT-3.5 fabricated 55% of its references, while GPT-4 reduced this to 18%, but that still means roughly one in five “cited” papers may not exist.
- It does not know your body. ChatGPT cannot assess your bloodwork, medical history, genetic predispositions, or current medication interactions.
- It gives confident-sounding wrong answers. The Mount Sinai study found that chatbots not only repeated misinformation but often expanded on it, offering confident explanations for non-existent conditions. Multiple studies have documented ChatGPT’s tendency to provide inaccurate health information with high confidence (PubMed 37076619).
- Its nutrition advice is inconsistent. A 2025 study in JACCP found that ChatGPT-3.5 changed its drug information responses within a single day, meaning you could get different answers to the same question depending on when you ask (Khatri et al., JACCP, 2025).
- It struggles with complex multi-condition scenarios. When a person has multiple health issues requiring dietary management (like diabetes plus kidney disease), ChatGPT often provides contradictory or inappropriate advice (Mishra et al., F1000Research, 2024).
- It underperforms on supplement interactions. A study in the Journal of the American Pharmacists Association found that ChatGPT models had accuracies below 50% for assessing drug interactions involving over-the-counter medications and herbal supplements (PubMed 39182234). Patient use of ChatGPT for health information requires careful oversight given these accuracy limitations (PubMed 37851825).
Bottom line: ChatGPT excels at organizing and explaining health information but fabricates 18-55% of citations and underperforms on drug interactions (below 50% accuracy), making it suitable as a research assistant only when every factual claim is verified through PubMed and professional consultation.
What Are the Best Prompt Templates for Health and Supplement Research?
The quality of what ChatGPT gives you depends almost entirely on how you ask. Vague questions produce vague (and often wrong) answers. Specific, structured prompts produce useful research starting points.
Here are tested prompt templates organized by research task.
Template 1: How Do You Understand a Supplement?
I want to understand [supplement name] for [health goal]. Please provide:
1. The proposed mechanism of action at a biological level
2. A summary of the strongest clinical evidence (randomized controlled trials preferred)
3. The most commonly studied dosage ranges
4. Known side effects and contraindications
5. Drug interactions I should be aware of
6. Whether the evidence is strong, moderate, or preliminary
For each claim, please cite specific studies with author names and publication years. Flag any claims where evidence is limited to animal or in-vitro studies only.
Example use: “I want to understand ashwagandha for cortisol reduction and stress management. Please provide…” This kind of structured prompt forces ChatGPT to organize its response in a way that is immediately useful for your research.
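If you prefer to run templates programmatically rather than pasting them into the chat window, here is a minimal sketch using the openai Python package. The model name "gpt-4o" and the placeholder values are assumptions; substitute whichever model and supplement you are researching, and verify every citation in the output exactly as described later in this article.

```python
# A minimal sketch of sending Template 1 via the OpenAI API, assuming the
# `openai` package (pip install openai) and an OPENAI_API_KEY environment
# variable. The model name "gpt-4o" is an assumption; use whatever you have.
from openai import OpenAI

TEMPLATE_1 = """I want to understand {supplement} for {goal}. Please provide:
1. The proposed mechanism of action at a biological level
2. A summary of the strongest clinical evidence (randomized controlled trials preferred)
3. The most commonly studied dosage ranges
4. Known side effects and contraindications
5. Drug interactions I should be aware of
6. Whether the evidence is strong, moderate, or preliminary
For each claim, please cite specific studies with author names and publication years.
Flag any claims where evidence is limited to animal or in-vitro studies only."""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": TEMPLATE_1.format(
            supplement="ashwagandha",
            goal="cortisol reduction and stress management",
        ),
    }],
)
print(response.choices[0].message.content)  # still verify every citation it returns
```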

Template 2: How Do You Compare Two Supplements?
Compare [Supplement A] and [Supplement B] for [specific health goal]:
1. Mechanism of action differences
2. Strength of clinical evidence for each
3. Typical dosage ranges
4. Cost-effectiveness
5. Side effect profiles
6. Any situations where one would be preferred over the other
Please cite specific studies and note any head-to-head comparison trials.
Example use: “Compare magnesium glycinate and magnesium threonate for sleep quality…” This prompts ChatGPT to make direct comparisons rather than giving you two separate descriptions that you then have to compare yourself.
Template 3: How Do You Check Drug Interactions?
I am considering [supplement name] at [dosage]. I currently take:
- [Medication 1] at [dose and frequency]
- [Medication 2] at [dose and frequency]
Please identify:
1. Any known interactions between this supplement and my medications
2. The mechanism of each interaction
3. The clinical significance (major, moderate, minor)
4. Whether timing of doses could reduce interaction risk
5. Any monitoring recommendations
Cite specific interaction studies or case reports.
Warning: Even with this structured prompt, ChatGPT’s accuracy on supplement-drug interactions is below 50%. Always verify interaction information with your pharmacist. This prompt is useful for generating questions to ask your pharmacist, not for making final decisions.
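Since this prompt’s best use is preparing for a pharmacist conversation, a small helper like the hypothetical sketch below can turn your supplement and medication list into a printable question sheet. All names and doses shown are illustrative placeholders, not recommendations.

```python
# A minimal sketch, not medical software: it only formats Template 3's inputs
# into questions to bring to your pharmacist. Example values are placeholders.
def pharmacist_questions(supplement: str, dosage: str, medications: list[str]) -> str:
    lines = [f"I am considering {supplement} at {dosage}. Please check:"]
    for med in medications:
        lines.append(f"- Any known interaction between {supplement} and {med}, "
                     "its mechanism, and its clinical significance (major/moderate/minor)?")
    lines.append("- Could dose timing reduce any interaction risk?")
    lines.append("- Is any monitoring recommended if I start this supplement?")
    return "\n".join(lines)

print(pharmacist_questions(
    "St. John's wort", "300 mg three times daily",
    ["sertraline 50 mg daily", "lisinopril 10 mg daily"],
))
```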
Template 4: How Do You Evaluate Research Quality?
Analyze this study: [paste title or link]
Provide:
1. Study design (RCT, observational, meta-analysis, etc.)
2. Sample size and population characteristics
3. Intervention details (dose, duration, form)
4. Primary outcomes measured
5. Key findings with effect sizes
6. Limitations and potential biases
7. How this fits into the broader evidence base
8. Whether results are likely generalizable to me as a [age/sex/condition]
This is powerful for when you find a study yourself and want help interpreting it. ChatGPT can be genuinely useful at explaining study methodology and identifying potential biases.
Template 5: How Do You Create a Research Protocol?
I want to research whether [supplement] is appropriate for [specific goal]. Create a research protocol including:
1. Key questions I should answer
2. Databases I should search (PubMed, Cochrane, etc.)
3. Search terms to use
4. Types of evidence to prioritize
5. Red flags that should make me stop considering this supplement
6. A checklist of information I need before making a decision
This transforms ChatGPT from an answer machine into a research planning assistant, which is arguably its most valuable use case.

How Do You Verify ChatGPT’s Health Claims?
This is where most people fail. You ask ChatGPT a question, it gives you a confident answer with what look like real citations, and you assume it is accurate. Do not do this.
Here is a systematic verification workflow.
Step 1: How Do You Check Citation Accuracy?
For every study ChatGPT cites:
- Copy the author names and year.
- Go to PubMed.gov.
- Search for [first author last name] [year] and verify that the study exists and that the title matches what ChatGPT claimed.
- If ChatGPT provides a PMID (PubMed ID number), search for that directly.
Red flags:
- The study does not exist at all (fabricated)
- The study exists but does not support the claim ChatGPT made
- The study is a preliminary animal or in-vitro study, but ChatGPT presented it as human evidence
- The date is wrong, suggesting ChatGPT confused multiple studies
Time cost: 1-2 minutes per citation. If ChatGPT cites 5 studies, expect to spend 5-10 minutes just verifying that the papers exist.
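If you verify citations often, the manual check above can be scripted against the free NCBI E-utilities API. This is a minimal sketch assuming only the requests package; an empty result list is a strong hint, not proof, that a citation was fabricated, since author spellings and dates vary.

```python
# A minimal sketch of Step 1 using NCBI E-utilities (free; no API key needed
# for light use). Assumes only `requests` (pip install requests).
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def pubmed_ids(author: str, year: int, keywords: str = "") -> list[str]:
    """Search PubMed for '<author> <year> <keywords>' and return matching PMIDs."""
    term = f"{author}[Author] AND {year}[pdat]"
    if keywords:
        term += f" AND {keywords}"
    r = requests.get(f"{EUTILS}/esearch.fcgi",
                     params={"db": "pubmed", "term": term, "retmode": "json"},
                     timeout=10)
    r.raise_for_status()
    return r.json()["esearchresult"]["idlist"]

def pubmed_title(pmid: str) -> str:
    """Fetch the real title for a PMID so you can compare it to ChatGPT's claim."""
    r = requests.get(f"{EUTILS}/esummary.fcgi",
                     params={"db": "pubmed", "id": pmid, "retmode": "json"},
                     timeout=10)
    r.raise_for_status()
    return r.json()["result"][pmid]["title"]

ids = pubmed_ids("Sallam", 2024, "hallucination")
for pmid in ids[:3]:
    print(pmid, pubmed_title(pmid))
if not ids:
    print("No match found -- possible fabricated citation; check spelling first")
```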
Step 2: How Do You Cross-Reference with Trusted Databases?
After verifying ChatGPT’s citations exist, check whether independent sources agree with its interpretation.
For supplements:
- Check Examine.com for an evidence summary.
- Look up the supplement on the NIH Office of Dietary Supplements website.
- If it is a prescription interaction question, use Drugs.com Interaction Checker or consult your pharmacist.
For nutrition claims:
- Check the Academy of Nutrition and Dietetics Evidence Analysis Library.
- Review relevant Cochrane systematic reviews.
For general health claims:
- Look for relevant systematic reviews or meta-analyses on PubMed.
- Check whether major health organizations (WHO, CDC, NHS) have position statements on the topic.
Time cost: 10-15 minutes per topic if you are thorough.
Step 3: How Do You Spot Hallucination Patterns?
Certain types of ChatGPT responses are more likely to contain hallucinations or oversimplifications. Learn to recognize these patterns:
High-risk response patterns:
- Suspiciously round numbers (“reduces inflammation by 50%”)
- Universal benefit claims with no mentioned downsides (“completely safe for everyone”)
- Very specific dosing recommendations without citing a source (“take exactly 600mg three times daily”)
- Claims about proprietary blends or specific brands
- Definitive statements about emerging or controversial topics
Medium-risk response patterns:
- Conflation of correlation with causation
- Extrapolation from animal studies to humans without noting the limitation
- Oversimplification of complex interactions
- Failure to mention individual variability
Lower-risk response patterns:
- Hedged language (“may help,” “some evidence suggests,” “preliminary research indicates”)
- Explicit mention of study limitations
- Clear distinction between different types of evidence (RCT vs. observational)
- Acknowledgment of conflicting research
Even lower-risk responses require verification, but you can prioritize your fact-checking time by focusing most heavily on high-risk patterns.
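These patterns are simple enough to scan for automatically. The sketch below is a crude heuristic, not a hallucination detector: it only highlights where in a response to spend your verification time first, and the pattern list is an illustrative starting point you should extend.

```python
# A heuristic sketch: flags high-risk phrasing patterns in a ChatGPT response.
# It cannot detect hallucinations; it only prioritizes what to verify first.
import re

HIGH_RISK = [
    (r"\b(?:100|[1-9]0)%", "suspiciously round percentage"),
    (r"completely safe|no side effects|safe for everyone", "universal benefit claim"),
    (r"exactly \d+\s*mg", "overly precise dosing"),
    (r"proprietary blend", "proprietary-blend or brand claim"),
]

def flag_response(text: str) -> list[str]:
    hits = []
    for pattern, label in HIGH_RISK:
        for m in re.finditer(pattern, text, re.IGNORECASE):
            hits.append(f"{label}: '{m.group(0)}'")
    return hits

print(flag_response("Berberine is completely safe and reduces inflammation by 50%."))
# -> ["universal benefit claim: 'completely safe'",
#     "suspiciously round percentage: '50%'"]
```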
Step 4: How Do You Assess Evidence Quality?
Not all studies are created equal. ChatGPT sometimes presents a single animal study with the same weight as a systematic review of 20 human randomized controlled trials. Learn to differentiate:
Evidence hierarchy (strongest to weakest):
- Systematic reviews and meta-analyses of multiple high-quality RCTs
- Large, well-designed randomized controlled trials (RCTs)
- Small or poorly controlled RCTs
- Prospective cohort studies
- Case-control studies
- Cross-sectional studies
- Case reports and case series
- Animal studies
- In-vitro (test tube) studies
- Theoretical mechanisms without experimental evidence
When ChatGPT cites a study, ask yourself: Where does this fall in the evidence hierarchy? A promising animal study is interesting but should not change your behavior the way a systematic review of human trials might.
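PubMed itself records publication types for every indexed paper, which makes a rough automated placement on this hierarchy possible. The sketch below reuses the esummary call from Step 1; the numeric ranks mirror this article’s hierarchy and are an assumption, not a standard scale.

```python
# A minimal sketch that ranks a PMID by its PubMed-recorded publication types.
# Rank numbers follow the hierarchy above (lower = stronger); they are this
# article's convention, not a standard.
import requests

RANK = {
    "Meta-Analysis": 1, "Systematic Review": 1,
    "Randomized Controlled Trial": 2,
    "Clinical Trial": 3,
    "Observational Study": 4,
    "Case Reports": 7,
}

def evidence_rank(pmid: str) -> tuple[int, list[str]]:
    r = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi",
        params={"db": "pubmed", "id": pmid, "retmode": "json"}, timeout=10)
    r.raise_for_status()
    pubtypes = r.json().get("result", {}).get(pmid, {}).get("pubtype", [])
    # Take the strongest matching type; unknown types default to weakest (10).
    return min((RANK.get(t, 10) for t in pubtypes), default=10), pubtypes

rank, types = evidence_rank("38280318")  # a PMID cited earlier in this article
print(rank, types)
```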

What Is the Complete AI Health Research Workflow?
Here is how to put all of this together into a practical workflow.
Phase 1: How Do You Define Your Question (5 Minutes)?
Before asking ChatGPT anything, clarify:
- What specific question am I trying to answer?
- What do I already know about this topic?
- What would change my behavior based on the answer?
- What is my risk tolerance (am I generally cautious or willing to try things based on preliminary evidence)?
Example: Instead of asking “Is ashwagandha good?”, ask “Does ashwagandha reduce cortisol levels in adults with chronic stress, and if so, what dosage and form show the strongest evidence?”
Phase 2: How Do You Conduct Initial AI Research (15-20 Minutes)?
- Use one of the structured prompts from earlier in this article.
- Read ChatGPT’s response critically.
- Note any claims that sound suspiciously strong or universal.
- Copy all citations for later verification.
- Ask follow-up questions about limitations, side effects, and contraindications.
Phase 3: How Do You Verify Claims (20-30 Minutes)?
- Verify each citation on PubMed.
- Cross-reference key claims on Examine.com.
- Check the NIH Office of Dietary Supplements fact sheet if available.
- Search for any recent studies (last 12 months) that ChatGPT might not know about (a scripted version follows this list).
- Check for drug interactions on Drugs.com or with your pharmacist.
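For the recency check in the list above, the same E-utilities endpoint accepts the documented reldate and datetype parameters, so a sketch like this can surface papers published after ChatGPT’s training cutoff:

```python
# A minimal sketch of the recency check: PMIDs from roughly the last 12 months.
# `reldate` and `datetype` are documented esearch parameters.
import requests

def recent_pmids(query: str, days: int = 365, retmax: int = 20) -> list[str]:
    r = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": query, "retmode": "json",
                "datetype": "pdat", "reldate": days, "retmax": retmax},
        timeout=10)
    r.raise_for_status()
    return r.json()["esearchresult"]["idlist"]

print(recent_pmids("ashwagandha AND cortisol AND randomized controlled trial"))
```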
Phase 4: How Do You Synthesize Information (10 Minutes)?
- Create a simple summary of what you found (a structured example follows this list).
- Rate the evidence quality honestly: strong, moderate, or preliminary.
- Identify any contradictions between sources.
- Note any unanswered questions.
- Decide whether this warrants a conversation with your healthcare provider.
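If you want your Phase 4 summaries in a consistent shape, a small record like the hypothetical one below keeps the checklist honest. Field names and example entries are illustrative, not a standard schema.

```python
# A minimal sketch of a Phase 4 research note; fields mirror the checklist
# above. All example values are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class ResearchNote:
    question: str
    evidence_quality: str                 # "strong", "moderate", or "preliminary"
    key_findings: list[str] = field(default_factory=list)
    contradictions: list[str] = field(default_factory=list)
    open_questions: list[str] = field(default_factory=list)
    discuss_with_provider: bool = False

note = ResearchNote(
    question="Does ashwagandha reduce cortisol in chronically stressed adults?",
    evidence_quality="moderate",
    key_findings=["Small RCTs report reduced cortisol vs. placebo (verify on PubMed)"],
    open_questions=["Optimal extract standardization; long-term safety"],
    discuss_with_provider=True,
)
print(note)
```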
Phase 5: How Do You Decide and Monitor?
- If the evidence supports trying a supplement, start with the lowest effective dose.
- Take notes on how you feel before starting (baseline).
- Track any changes over the following weeks.
- Revisit the research monthly as new studies may be published.
- Report any adverse effects to your healthcare provider.
Bottom line: An effective AI health research workflow follows five phases over roughly 60-75 minutes: define your question and existing knowledge (5 min), conduct initial AI research with structured prompts (15-20 min), verify all claims through PubMed and trusted databases (20-30 min), synthesize findings with honest evidence-quality ratings (10 min), and then decide, starting with the lowest effective dose while tracking results and monitoring new research monthly.
How Do Your AI Research Skills Improve Over Time?
Using AI for health research is a skill that develops with practice. Here is what to expect.
Week 1: What Is the Learning Curve?
- You are still learning how to write effective prompts.
- You might not verify citations consistently.
- Your research takes longer because you are building new habits.
- Expected: You catch your first fabricated citation, which is both frustrating and educational.
- Key skill to develop: Getting comfortable navigating PubMed.gov.
Weeks 2-4: How Do You Build Verification Habits?
- Prompt quality improves significantly as you learn what works.
- You develop a routine: prompt, read, verify, cross-reference.
- You start recognizing common hallucination patterns before verifying them.
- Expected: Your verification speed doubles. You can spot suspicious claims intuitively.
- Key skill to develop: Evaluating evidence quality (RCT vs. observational vs. animal studies).
Months 1-2: How Do You Develop Critical Thinking?
- You naturally seek disconfirming evidence without being prompted.
- You can evaluate a study’s methodology and identify weaknesses.
- You have a personal evidence threshold for supplement decisions.
- Expected: You find yourself correcting health misinformation in conversations with friends and family, because your understanding of evidence now exceeds the average consumer’s.
- Key skill to develop: Understanding systematic reviews and meta-analyses.
Month 3+: What Is Research Fluency?
- You use AI as one tool in a broader research toolkit.
- You can quickly assess new supplement claims and sort signal from noise.
- You have built a personal knowledge base that makes new research faster.
- Expected: You spend less time on research per question because your foundational knowledge is solid.
- Key skill to develop: Staying current with new research and updating your understanding when the evidence changes.
- You now recognize that the best supplement researchers are not the ones who know the most, but the ones who are most skilled at identifying what they do not know.
Bottom line: AI health research skills develop on a predictable timeline. Week 1 focuses on learning effective prompts and catching your first fabricated citation; Weeks 2-4 build systematic verification habits and evidence-evaluation skills; Months 1-2 develop critical thinking and the ability to identify methodological weaknesses; and by Month 3+ you reach research fluency, using AI efficiently as one tool among many while staying aware of your knowledge gaps.
How Should You Use AI to Evaluate Different Supplement Categories?
Here are targeted strategies for using ChatGPT to research the most common supplement categories.
How Do You Evaluate Vitamins and Minerals?
For vitamins and minerals, the research base is generally strong, and ChatGPT performs relatively well because these topics are heavily represented in its training data. Key questions to ask:
- “What is the difference between the RDA and the optimal intake for [vitamin/mineral]?”
- “What forms of [mineral] have the best absorption data?”
- “What blood markers indicate deficiency in [vitamin/mineral]?”
Verify against: NIH Office of Dietary Supplements fact sheets, which are free and comprehensive.
How Do You Evaluate Herbal Supplements?
This is where ChatGPT’s accuracy drops significantly. Herbal supplements have less research coverage, more variability in product quality, and more complex interaction profiles. Be especially careful with:
- Dosage recommendations (standardized extract vs. whole herb can differ dramatically)
- Interaction claims (ChatGPT’s accuracy for herbal-drug interactions is below 50%)
- Quality claims (ChatGPT cannot assess whether a specific product actually contains what the label says)
Verify against: Examine.com, Natural Medicines Database, and ConsumerLab.com product testing.
How Do You Evaluate Probiotics?
Probiotic research is highly strain-specific, meaning a claim about Lactobacillus rhamnosus GG does not necessarily apply to other strains of Lactobacillus rhamnosus. ChatGPT sometimes conflates strain-level evidence with species-level claims.
Ask specifically: “What specific strains have been studied for [condition]? Please provide strain designations, not just species names.”
How Do You Evaluate Performance and Sports Supplements?
For well-studied performance supplements like creatine, caffeine, and beta-alanine, ChatGPT is generally reliable because the research base is extensive. For newer or niche performance supplements, apply extra scrutiny. Research on AI in nutrition science shows promise but requires clinical validation (PubMed 37843975).
How Do You Evaluate Anti-Aging and Longevity Supplements?
This category is particularly prone to hype. Many longevity supplement claims are based on animal studies, in-vitro research, or theoretical mechanisms. ChatGPT tends to be overly optimistic about these supplements because it summarizes research without sufficient emphasis on limitations.
Critical questions to ask:
- “What percentage of the evidence for [supplement] comes from human studies vs. animal studies?”
- “Have any randomized controlled trials measured lifespan or healthspan in humans?”
- “What are the known risks at the dosages being marketed?”
Bottom line: ChatGPT is generally reliable for well-studied vitamins, minerals, and performance supplements like creatine, where the research base is extensive. Its accuracy drops significantly for herbal supplements (below 50% on interactions), which require extra scrutiny on dosage and interaction claims; it conflates strain-specific probiotic evidence, so demand precise strain designations; and it tends toward excessive optimism on anti-aging compounds, where the evidence is mostly preliminary animal research.
What Does the Future Hold for AI in Health Research?
The landscape is changing rapidly. A 2025 systematic review published in BMC Medical Education found that AI-driven personalized learning tools are “transforming health education by enabling personalized, adaptive, and scalable approaches that may enhance aspects of health literacy” (PubMed 39891893).
A separate 2025 systematic review in Mayo Clinic Proceedings: Digital Health reviewed AI and health literacy studies from 2014-2024 and found that while AI tools showed promising user satisfaction rates exceeding 85%, performance in accuracy and reliability remained mixed, “particularly when addressing complex medical topics” (PubMed 39906264). Understanding how to critically evaluate AI-generated health information is becoming an essential digital health literacy skill (PubMed 37158291).
An umbrella review published in the Journal of Biomedical Science in 2025, synthesizing evidence from 296 publications, noted that while ChatGPT achieves significant efficiency gains (including a 70% reduction in administrative time for discharge summaries), key challenges persist including “data inaccuracies, algorithmic biases, insufficient clinical validation, and communication barriers” (PubMed 39822304). Evaluation of ChatGPT for nutrition recommendations showed variable accuracy depending on the complexity of the dietary question (PubMed 37523305).
What does this mean for you? AI health research tools will keep getting better, but the fundamental need for human verification will not go away. The skills you build now—critical thinking, evidence evaluation, systematic verification—will remain valuable regardless of how advanced AI becomes.
The National Academy of Medicine has introduced the concept of “Critical AI Health Literacy” as a necessary new skill for patient empowerment, describing it as “the deliberate and informed use of AI to challenge institutional priorities that conflict with patient values” (NAM, 2025). Learning to use AI well for health research is not just about finding information. It is about developing the judgment to use that information wisely.
Bottom line: AI health tools show promising user satisfaction (over 85%) and real efficiency gains (a 70% reduction in administrative time for discharge summaries) but continue to face data inaccuracies, algorithmic biases, and insufficient clinical validation. AI capabilities will improve, yet the need for human verification through critical thinking and evidence evaluation will remain essential, making “Critical AI Health Literacy” an increasingly vital patient-empowerment skill.
Related Articles
- Do You Need a Multivitamin
- Best Time to Take Supplements: Morning or Night
- Do Greens Powders Actually Work
- Seed Oils: Are They Actually Bad for You
- How to Improve Gut Health Naturally
- Best Magnesium Supplements
- Best Probiotic Supplements
Frequently Asked Questions
Q: Can ChatGPT replace my doctor for supplement advice?
Absolutely not. ChatGPT cannot examine you, access your medical history, interpret your lab results in context, or take responsibility for adverse outcomes. Use it to prepare for medical conversations, not to replace them.
Q: Which AI model is best for health research?
As of early 2026, GPT-4 and its successors show significantly lower hallucination rates than GPT-3.5 (18% vs. 55% fabricated citations). Claude, Gemini, and Perplexity also offer different strengths. Perplexity is particularly useful because it provides inline source links. Using multiple models and cross-referencing their answers improves reliability.
Q: How do I know if ChatGPT is hallucinating?
You cannot tell from the output alone, because hallucinated content sounds identical to accurate content. The only reliable method is external verification: check every citation on PubMed, cross-reference claims with Examine.com, and look for the red flags described in this article (suspiciously round numbers, universal benefit claims, no mention of side effects).
Q: Is it safe to ask ChatGPT about drug interactions?
Only as a starting point. Research shows ChatGPT can identify whether an interaction exists at high rates (up to 100% in one study) but has poor accuracy for severity classification (37.3%). Always verify drug interactions with your pharmacist or a dedicated interaction checker like Drugs.com or Lexicomp.
Q: Can I use ChatGPT to interpret my blood test results?
ChatGPT can explain what each marker means in general terms, which is genuinely useful for health literacy. However, it cannot interpret your specific results in the context of your complete medical history, concurrent conditions, or medications. Use it to prepare informed questions for your doctor.
References
University of Oxford. “New study warns of risks in AI chatbots giving medical advice.” Nature Medicine, February 2026.
Icahn School of Medicine at Mount Sinai. “AI Chatbots Can Run With Medical Misinformation, Study Finds.” August 2025.
Ponzo V, et al. “Is ChatGPT an effective tool for providing dietary advice?” Nutrients, 2024.
Mishra V, et al. “Evaluation of accuracy and potential harm of ChatGPT in medical nutrition therapy—a case-based approach.” F1000Research, 2024.
Alanezi F. “Examining the role of ChatGPT in promoting health behaviors and lifestyle changes among cancer patients.” Nutrition and Health, 2025.
Sallam M. “Hallucination Rates and Reference Accuracy of ChatGPT and Bard for Systematic Reviews: Comparative Analysis.” Journal of Medical Internet Research, 2024.
Khatri H, et al. “Accuracy and reproducibility of ChatGPT responses to real-world drug information questions.” JACCP: Journal of the American College of Clinical Pharmacy, 2025.
“Evaluation of ChatGPT-generated medical responses: A systematic review and meta-analysis.” Journal of Biomedical Informatics, 2024.
“Evaluating the capability of ChatGPT in predicting drug interactions.” PMC, 2024.
“Risk stratification of potential drug interactions involving common over-the-counter medications and herbal supplements by a large language model.” Journal of the American Pharmacists Association, 2024.
“Artificial intelligence for social innovation in health education: promoting health literacy through personalized AI-driven learning tools.” BMC Medical Education, 2025.
“Artificial Intelligence Techniques and Health Literacy: A Systematic Review.” Mayo Clinic Proceedings: Digital Health, 2025.
“Impact of large language model (ChatGPT) in healthcare: an umbrella review and evidence synthesis.” Journal of Biomedical Science, 2025.
“Evaluating the accuracy of ChatGPT in delivering patient instructions for medications.” Frontiers in Artificial Intelligence, 2025.
“Critical AI Health Literacy as Liberation Technology: A New Skill for Patient Empowerment.” National Academy of Medicine, 2025.
Garcia MB. “ChatGPT as a virtual dietitian: Exploring its potential as a tool for improving nutrition knowledge.” Applied System Innovation, 2023.