[MA 2026 03] Development and Evaluation of AI-driven decision support for professionals and chatbot for parents, in preventive youth healthcare (DAILY)
TNO, Child Health, Leiden
Proposed by: Olivier Blanson Henkemans [olivier.blansonhenkemans@tno.nl]
Introduction
Preventive child health care aims to support healthy growth, development, and wellbeing throughout childhood. Parents regularly seek guidance on development, lifestyle, and parenting, while preventive health professionals face increasing demands, complexity, and time pressure in their work. Digital support tools, including Artificial Intelligence (AI), are increasingly explored to support both parents and professionals by improving access to reliable information and strengthening decision-making in preventive care.[1]
Recent studies show that Large Language Models (LLMs) can support guideline interpretation and decision-making in healthcare, provided they are implemented responsibly and evaluated in real-world contexts.[2] At the same time, parents increasingly use generic AI tools for health information, which raises concerns about accuracy, safety, and trustworthiness.[3]
TNO addresses these challenges in the DAILY programme by developing and evaluating two complementary AI applications in preventive child healthcare:
- AI-driven decision support for professionals, and
- an AI-driven chatbot for (expectant) parents.
Description of the SRP Project/Problem
The project evaluates whether AI-based tools can safely and effectively support decision-making and information provision in youth health care. The main problem addressed is the lack of empirical evidence on the reliability, usability, safety, and trustworthiness of AI-driven decision support and chatbots when embedded in real preventive care workflows. While AI systems show technical potential, their actual value depends on alignment with professional work processes, parental needs, and strict privacy and safety requirements.[4] Within DAILY, both applications are developed through co-creation and evaluated in pilot settings to assess their practical added value and limitations.
Research questions
RQ1. To what extent do AI-generated recommendations for professionals align with existing youth health care guidelines and professional judgment?
RQ2. How usable and trustworthy do professionals perceive AI-driven decision support during consultations?
RQ3. To what extent do AI-chatbots provide accurate, understandable, and safe information to parents?
RQ4. How do parents experience the chatbot in terms of trust, usability, and perceived empowerment?
RQ5. What conditions are critical for responsible implementation and future scaling of AI applications in preventive child healthcare?
(This list of research questions is not exhaustive.)
Expected results
The project is expected to deliver:
- Quantitative evidence on the accuracy and guideline conformity of AI-generated advice.
- Insights into usability, workflow fit, and trust among professionals.
- Evidence on safety, understandability, and perceived value of an AI chatbot for parents.
- Practical design and implementation guidelines for responsible AI use in preventive healthcare.
- Scientific and societal insights relevant for broader application of AI in public health contexts.
Time period
- November – June (x)
- May – November (x)
References
[1] Huizing AHJ, Eekhout I, van Buuren S, Blanson Henkemans OA. Data-driven healthcare innovations in a fragmented healthcare system: a modular approach. Stud Health Technol Inform. 2025;327:328–332.
[2] Moeskops I, van den Broek T, Huizing A, Blanson Henkemans OA. AI-beslisondersteuning in de zorg: De waarde van Large Language Models voor jeugdartsen: Een verkenning. Tijdschr Hum Factors. 2025;50(1).
[3] Bakker L. Exploration of AI in the client contact centre of the CYH [master's internship, Medical Informatics]. Amsterdam: University of Amsterdam; 2025.
[4] Allard BM. User-centered design of an LLM-supported decision support system for clinical guidelines [master’s thesis]. Amsterdam: Faculty of Medicine, University of Amsterdam; 2025.