Abstract
Large Language Models (LLMs) excel at many tasks but often struggle with complex, multi-step reasoning, leading to inconsistencies and hallucinations. To address this limitation, we propose a neural-symbolic integration framework that enhances LLM reasoning by incorporating formal knowledge—such as logical rules, ontologies, and knowledge graphs—into their chain-of-thought (CoT) process. Our approach retrieves relevant symbolic information and integrates it into the prompt to guide logical inference, yielding more accurate and interpretable outputs. Experiments on compositional reasoning benchmarks demonstrate significant improvements over standard LLM baselines. This work highlights the potential of neural-symbolic integration for building more reliable and explainable AI systems in high-stakes applications.
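The sketch below illustrates the retrieve-and-integrate pattern the abstract describes: symbolic facts are retrieved from a knowledge graph and prepended to a CoT prompt. All names here (`TOY_KG`, `retrieve_facts`, `build_cot_prompt`) and the keyword-overlap retrieval heuristic are illustrative assumptions, not the paper's actual mechanism, which the abstract does not specify.

```python
# Minimal sketch: ground a chain-of-thought prompt in retrieved
# knowledge-graph facts. All identifiers are hypothetical.

# Toy knowledge graph as (subject, relation, object) triples.
TOY_KG = [
    ("penguin", "is_a", "bird"),
    ("bird", "can", "fly"),
    ("penguin", "cannot", "fly"),
]


def retrieve_facts(question: str, kg, max_facts: int = 5):
    """Return triples whose subject or object appears in the question.

    A stand-in for whatever retrieval the framework actually uses
    (e.g., embedding similarity or graph traversal).
    """
    tokens = set(question.lower().replace("?", "").split())
    hits = [t for t in kg if t[0] in tokens or t[2] in tokens]
    return hits[:max_facts]


def build_cot_prompt(question: str, kg) -> str:
    """Prepend retrieved symbolic facts to a CoT-style prompt."""
    facts = retrieve_facts(question, kg)
    fact_lines = "\n".join(f"- {s} {r} {o}" for s, r, o in facts)
    return (
        "Known facts:\n"
        f"{fact_lines}\n\n"
        f"Question: {question}\n"
        "Reason step by step, citing the facts above, then answer."
    )


if __name__ == "__main__":
    # The resulting prompt would be sent to an LLM; printed here instead.
    print(build_cot_prompt("Can a penguin fly?", TOY_KG))
```

Grounding the prompt in explicit triples lets the model's intermediate steps cite checkable facts, which is the source of the interpretability gains the abstract claims.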
