A persistent challenge in AI is the effective integration of material and formal inference - the former concerning the plausibility and contextual relevance of arguments, while the latter focuses on their logical and structural validity. Large Language Models (LLMs), by virtue of their extensive pre-training on large textual corpora, exhibit strong capabilities in material inference. However, their reasoning often lacks formal rigour and verifiability. At the same time, LLMs’ linguistic competence positions them as a promising bridge between natural and formal languages, opening up new opportunities for combining these two modes of reasoning. We introduce PEIRCE, a neuro-symbolic framework designed to unify material and formal inference through an iterative conjecture–criticism process. Within this framework, LLMs play the central role of generating candidate solutions in natural and formal languages, which are then evaluated and refined via interaction with external critique models. These critiques include symbolic provers, which assess formal validity, as well as soft evaluators that measure the quality of the generated arguments along linguistic and epistemic dimensions such as plausibility, coherence, and parsimony. While PEIRCE is a general-purpose framework, we demonstrate its capabilities in the domain of natural language explanation generation - a setting that inherently demands both material adequacy and formal correctness.
ACL 2025 Demo paper: https://aclanthology.org/2025.acl-demo.2/
High-level overview of the framework:
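As a rough, purely illustrative complement to the overview, the conjecture-criticism loop can be sketched in Python as follows. All names (`refine`, `llm.generate`, `Critique`-style objects, etc.) are hypothetical and do not correspond to the actual PEIRCE API:

```python
# Hypothetical sketch of the conjecture-criticism loop; names and signatures
# are illustrative only and do not reflect the actual PEIRCE interface.
def refine(problem, llm, critiques, max_iterations=5):
    """Iteratively refine an LLM-generated candidate solution using critiques."""
    candidate = llm.generate(problem)  # initial conjecture (natural/formal language)
    for _ in range(max_iterations):
        # Collect feedback: hard critiques (e.g., symbolic provers) check formal
        # validity; soft critiques score plausibility, coherence, and parsimony.
        feedback = [critique.evaluate(candidate) for critique in critiques]
        if all(f.passed for f in feedback):
            break
        # Ask the LLM to revise the conjecture in light of the critiques
        candidate = llm.refine(problem, candidate, feedback)
    return candidate
```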
To help you get familiar with PEIRCE, we released a set of demonstrations showcasing the applicability of the framework to different NLI tasks and domains:
- Refinement with hard and soft critiques, link
- LLMs-Symbolic Explanation Refinement (with hard Isabelle critique), link
- Inference to the Best Explanation in Large Language Models (with soft critiques), link
- Hybrid Inductive Logic Programming (with hard Prolog critique), link
- Explanation Retrieval and Explanatory Unification Patterns, link.
To install all the required Python libraries for running PEIRCE, clone the repository locally and execute the following command:
```bash
pip install -r requirements.txt
```

To integrate different explanation-centred NLI datasets with PEIRCE, we implemented a separate Python package called SSKB:

```bash
pip install sskb
```
To use the soft critique models, first install spaCy’s English model by running:
```bash
python -m spacy download en_core_web_sm
```
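To check that the model is available, you can load it once from Python:

```python
import spacy

# Loading the model raises an OSError if the download step was skipped
nlp = spacy.load("en_core_web_sm")
print(nlp.pipe_names)
```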
Some critique models use external solvers that require a separate installation. To install the SWI-Prolog solver, follow the instructions below:

```bash
sudo add-apt-repository ppa:swi-prolog/stable
sudo apt-get update
sudo apt-get install swi-prolog
pip install -U pyswip
```
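To verify that SWI-Prolog and PySwip work together, a quick sanity check (a minimal sketch using PySwip's standard query interface) is:

```python
from pyswip import Prolog

# Assert a simple fact and query it through SWI-Prolog
prolog = Prolog()
prolog.assertz("parent(alice, bob)")
results = list(prolog.query("parent(alice, X)"))
print(results)  # should print something like [{'X': 'bob'}]
```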
To install Isabelle, follow the instructions below.

On Linux, download Isabelle2023 into your working directory (e.g., Desktop):
```bash
wget https://isabelle.in.tum.de/website-Isabelle2023/dist/Isabelle2023_linux.tar.gz
tar -xzf Isabelle2023_linux.tar.gz --no-same-owner
```

Append Isabelle2023's bin directory to your PATH:
```bash
export PATH=$PATH:/workspace/Isabelle2023/bin
```

On macOS, download Isabelle2023/2024 from the official website: https://isabelle.in.tum.de/
Append Isabelle2023's bin directory to your PATH:

```bash
export PATH=$PATH:/Users/user/Desktop/Isabelle2023.app/bin
```

When using isabelle-client inside Jupyter, both Jupyter and isabelle-client rely on asyncio, which requires nested event loops to be enabled. This step is not necessary when running isabelle-client from standalone Python scripts outside of Jupyter.
```python
import os
import nest_asyncio

# Enable nested event loops so isabelle-client can run inside Jupyter
nest_asyncio.apply()

# Append the Isabelle binaries to PATH so the client can find them
original_path = os.environ.get('PATH', '')
new_path = original_path + ':/workspace/Isabelle2023/bin'
os.environ['PATH'] = new_path
print(os.environ['PATH'])
```
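Once the PATH is configured, isabelle-client can talk to a running Isabelle server. The following is a minimal sketch based on isabelle-client's documented usage; exact function signatures may vary across package versions:

```python
import logging

from isabelle_client import get_isabelle_client, start_isabelle_server

# Start a local Isabelle server and attach a client to it
server_info, _ = start_isabelle_server(
    name="test", port=9999, log_file="server.log"
)
isabelle = get_isabelle_client(server_info)
isabelle.logger = logging.getLogger("isabelle")

# Check a theory file (e.g., Example.thy in the current directory)
responses = isabelle.use_theories(theories=["Example"], master_dir=".")
print(responses)

isabelle.shutdown()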
Set your api_key in the config.yaml file to use the generative models.
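As a purely illustrative example (the actual keys are those already defined in the repository's config.yaml), the relevant entry might look like:

```yaml
# Hypothetical illustration; keep the key names already present in config.yaml.
api_key: "YOUR_API_KEY_HERE"
```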
If you find this repository useful, please consider citing our demo paper.
@inproceedings{quan-etal-2025-peirce,
title = "{PEIRCE}: Unifying Material and Formal Reasoning via {LLM}-Driven Neuro-Symbolic Refinement",
author = "Quan, Xin and
Valentino, Marco and
Carvalho, Danilo and
Dalal, Dhairya and
Freitas, Andre",
editor = "Mishra, Pushkar and
Muresan, Smaranda and
Yu, Tao",
booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.acl-demo.2/",
doi = "10.18653/v1/2025.acl-demo.2",
pages = "11--21",
ISBN = "979-8-89176-253-4"
}
