This guide walks you through a basic example of using NL2FOL to translate a natural language statement into first-order logic and detect logical fallacies.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# For GPT-4
model_type = 'gpt'
pipeline = None
tokenizer = None

# Initialize NLI model for entity relation detection
nli_model_name = "microsoft/deberta-large-mnli"
nli_tokenizer = AutoTokenizer.from_pretrained(nli_model_name)
nli_model = AutoModelForSequenceClassification.from_pretrained(nli_model_name)
```
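The NLI model classifies a premise/hypothesis pair into three MNLI labels. A minimal stdlib sketch of the post-processing step, assuming the `microsoft/deberta-large-mnli` label order (contradiction, neutral, entailment) and using hypothetical logits in place of a real forward pass:

```python
import math

# Assumed label order for microsoft/deberta-large-mnli
LABELS = ["CONTRADICTION", "NEUTRAL", "ENTAILMENT"]

def logits_to_label(logits):
    """Softmax the raw logits and return the most probable NLI label."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = probs.index(max(probs))
    return LABELS[best], probs[best]

# Hypothetical logits, e.g. from nli_model(**inputs).logits[0]
label, prob = logits_to_label([-2.1, 0.3, 3.4])
print(label)  # ENTAILMENT
```

In the real pipeline the logits come from `nli_model(**nli_tokenizer(premise, hypothesis, return_tensors="pt")).logits`; the mapping above is the only extra step needed to turn them into an entity-relation decision.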
## 3. Create an NL2FOL instance
Initialize the translator with your sentence:
```python
sentence = "All politicians are corrupt. John is a politician. Therefore, John is corrupt."
nl2fol = NL2FOL(
    sentence=sentence,
    model_type=model_type,
    pipeline=pipeline,
    tokenizer=tokenizer,
    nli_model=nli_model,
    nli_tokenizer=nli_tokenizer,
    debug=True
)
```
## 4. Convert to first-order logic
Run the conversion pipeline:
```python
final_lf, final_lf2 = nl2fol.convert_to_first_order_logic()
print("Logical Form 1:", final_lf)
print("Logical Form 2:", final_lf2)
```
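For the running example, the expected logical form encodes the syllogism ∀x.(Politician(x) → Corrupt(x)), Politician(john) ⊢ Corrupt(john). As a stdlib sanity check (independent of NL2FOL; the predicate names are illustrative), we can verify that the entailment holds under every interpretation over a small finite domain:

```python
from itertools import product

domain = ["john", "alice"]

def premises_hold(politician, corrupt):
    """Both premises: all politicians are corrupt, and John is a politician."""
    all_pol_corrupt = all((x not in politician) or (x in corrupt) for x in domain)
    return all_pol_corrupt and ("john" in politician)

# Enumerate every extension of the two predicates over the domain
valid = True
for pol_bits, cor_bits in product(product([False, True], repeat=2), repeat=2):
    politician = {x for x, b in zip(domain, pol_bits) if b}
    corrupt = {x for x, b in zip(domain, cor_bits) if b}
    if premises_hold(politician, corrupt) and "john" not in corrupt:
        valid = False  # a counterexample would make the argument invalid

print(valid)  # True: no interpretation satisfies the premises but not the conclusion
```

This brute-force check is only feasible for tiny domains; NL2FOL instead hands the formulas to an SMT solver, which performs the equivalent validity check symbolically.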
## Next steps

- Learn how to switch between GPT-4 and Llama models
- **Custom Datasets**: Process your own datasets for fallacy detection
- **SMT Solving**: Use CVC5 to verify logical formulas
- **API Reference**: Explore the complete NL2FOL API
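The SMT-solving step mentioned above can be illustrated with a small SMT-LIB 2 encoding of the running example. This is a hand-written sketch, not the exact encoding NL2FOL emits: validity is checked by asserting the premises together with the negated conclusion and asking the solver for satisfiability.

```
; Sketch of an SMT-LIB 2 encoding of the example argument
(set-logic UF)
(declare-sort U 0)
(declare-fun Politician (U) Bool)
(declare-fun Corrupt (U) Bool)
(declare-const john U)
(assert (forall ((x U)) (=> (Politician x) (Corrupt x))))
(assert (Politician john))
(assert (not (Corrupt john)))   ; negated conclusion
(check-sat)                      ; unsat => the argument is valid
```

Running this file through `cvc5` reports `unsat`, meaning no model satisfies the premises while falsifying the conclusion, so the argument contains no formal fallacy.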
The translation pipeline uses multiple LLM calls to extract claims, referring expressions, properties, and logical forms. With GPT-4, expect the process to take 10-30 seconds per sentence depending on complexity.