MERLIN: Models for Explainable Reasoning and Learning through INtegration

Current Artificial Intelligence (AI) excels at specific tasks but struggles with generalization, explainability, and reasoning. Symbolic AI, while offering transparency, is less robust in dynamic environments. This project studies neurosymbolic AI as an integration of learning and logical representation, with the goal of establishing guidelines for the development of interpretable and adaptable intelligent systems.

Funding: PRA, Università Digitale Pegaso.