Zurich 2024

Kick-off Meeting, 4–5 July 2024

Photo album about the event.

The kick-off meeting launched the Neurosymbolic AI for Medicine network, bringing together researchers to define how neural learning and symbolic reasoning can be combined for safer, more transparent medical AI. Participants discussed definitions and boundaries of NeSy systems, their importance for integrating diverse medical data and clinical knowledge, and the challenges of bias, explainability, and regulation. The network agreed to produce a white paper and roadmap, and build a shared platform for collaboration and resources—laying the foundation for a global community advancing trustworthy, knowledge-driven AI in healthcare.

Agenda

Day 1 — Internal Network Kick-off Meeting UZH Main Building, Rämistrasse 71, Room KOL-F-123

11:00 – 12:30Introductions and Discussion
Definition of Neuro-Symbolic AI in Medicine; challenges, opportunities, and the motivation for establishing the network.

12:30 – 13:30Lunch

13:30 – 14:30White Paper Brainstorming
Outline, structure, responsibilities, and workplan for a community white paper.

14:30 – 15:30Network Activities Discussion
Ideas for publicity, special journal issues, future meetings, grant opportunities, and community growth.

15:30 – 16:00Coffee Break

16:00 – 17:30Case Studies Deep Dive
Exploring clinical and medical scenarios suitable for NeSy-AI methods; current progress and research gaps.

19:00+Dinner at Osteria Borgo


Day 2 — AMS Network & DSI Health Workshop

DSI Event Room, Rämistrasse 69, 8001 Zürich
Part of the DSI Health Seminar Series: “Machine Learning in Health that Matters” Event link

Morning Session

09:00 – 09:10Welcome — Ernesto & Janna
09:10 – 09:30Introduction to the AMS Network NSAI-Med — Ernesto & Janna
09:30 – 10:00Neural-symbolic Knowledge Representation with Ontology and KG Embeddings — Jiaoyan Chen (Slides)
10:00 – 10:30What do Knowledge Graph Embeddings Actually Learn? — Heiko Paulheim
10:30 – 11:00Coffee Break
11:00 – 11:20Neurosymbolic Techniques for Relation Extraction from Text — V. Raghava Mutharaju (Slides)
11:20 – 12:40Neuro-symbolic Approaches for Knowledge Graph Refinement and Explanations — Claudia D’Amato (Slides)
11:40 – 12:00BioBLP: A modular framework for representation learning over biomedical knowledge graphs — Michael Cochez (Slides)
12:00 – 12:30Considerations for Pediatric AI/ML from a Global Health Perspective — Joel Schamroth
12:30 – 13:30Lunch

Afternoon Session — Hackathon / Promptathon

13:30 – 14:00 — Topic pitches: LLMs and Knowledge Graphs for Medicine
14:00 – 16:00 — Group work sessions (with coffee at 15:00)
16:00 – 16:30 — Feedback presentations from groups

Evening Session

17:00 – 18:00Keynote: “Machine Learning in Health that Matters – Building an AI-Ready World” — Leo Anthony Celi
18:00 – 19:00Apéro and Networking
19:30+Dinner at Brasserie Johanniter


Outcomes and Discussions

The meeting established a shared vision for Neurosymbolic AI in Medicine:
to develop hybrid systems that combine the learning power of neural models with the transparency, reasoning, and domain knowledge of symbolic approaches—toward safer, fairer, and more trustworthy medical AI.

Defining Neurosymbolic AI

Participants discussed how to define neurosymbolic AI (NeSy-AI), highlighting the tension between narrow definitions—where neural and symbolic components are tightly integrated—and broader interpretations that include systems such as knowledge-graph embeddings or neural models interacting with symbolic reasoners.

The group agreed that LLMs are not inherently neurosymbolic, though they can become so when combined with reasoning or planning modules. Six categories of NeSy systems were reviewed, ranging from loosely coupled hybrids to hypothetical, fully integrated architectures.


Why NeSy-AI is Essential in Medicine

NeSy-AI can address core limitations of current deep-learning systems by enabling:

  • Integration of diverse data sources (imaging, text, genomics, clinical records)
  • Incorporation of biomedical knowledge (ontologies such as SNOMED, UMLS, Gene Ontology)
  • Transparent and verifiable reasoning, essential for clinical safety and regulation
  • Improved performance with small or biased datasets through symbolic constraints and transfer learning
  • Multimodal and process-level reasoning for complex care pathways and causal relationships

Challenges and Barriers

Key obstacles identified include:

  • Limited regulatory clarity and liability frameworks for AI-assisted decision-making
  • Poor data capture and documentation, with little clinician incentive to record structured information
  • Disconnect between ontologies and clinical reality, reducing applicability
  • Risk of systematic bias and errors in both human and AI decision processes

Opportunities and Use Cases

NeSy-AI offers promising applications such as:

  • Human-in-the-loop systems maintaining clinician control
  • Explainable medical imaging and artefact detection
  • Drug repurposing and personalised medicine
  • Clinical documentation assistance and multimodal reasoning
  • Symbolic bias-mitigation and fairness checking

Discussion on Bias (Day 2)

The Bias Working Group meeting, led by researchers from City, University of London, the Universities of Bari and Liverpool, Harvard, and others, focused on examining and demonstrating bias in large language models (LLMs) within clinical contexts. The group aims to show how identical prompts can yield inconsistent or biased outputs, including cases where minor changes such as mentioning a patient’s race affect diagnostic suggestions. Members discussed testing LLM reliability using false or misleading statements, evaluating prompt formats (zero-shot vs. dialogue modes), and assessing how patient history influences responses. They proposed developing guidance for clinicians on safe and effective prompt engineering, alongside a qualitative framework to label prompts and responses according to their risk of bias or inconsistency. Relevant recent studies and resources were shared to inform this work.


Network Plans and Actions

  • Draft a white paper and roadmap defining priorities and opportunities for NeSy-AI in medicine
  • Organise a workshop with leading researchers (target: 2025)
  • Build a community platform (GitHub, website, social media) for datasets, teaching materials, and collaboration
  • Pursue funding opportunities (EPSRC, COST, HDR UK, EU Horizon) and foster consortia for larger projects
  • Conduct case studies on clinical scenarios (e.g., preterm care, imaging artefacts) to demonstrate impact