
Overview

This guide walks you through a basic example of using NL2FOL to translate a natural language statement into first-order logic and detect logical fallacies.

Prerequisites

Before you begin, ensure you have:
  • Installed NL2FOL and its dependencies
  • Set up your OpenAI API key (for GPT models) or Llama model access
  • Installed the required NLI model

Quick Example

1. Import the NL2FOL class

Start by importing the necessary modules:
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from nl_to_fol import NL2FOL
from openai import OpenAI

client = OpenAI()
2. Initialize the models

Set up your language model and NLI model:
# For GPT-4
model_type = 'gpt'
pipeline = None
tokenizer = None

# Initialize NLI model for entity relation detection
nli_model_name = "microsoft/deberta-large-mnli"
nli_tokenizer = AutoTokenizer.from_pretrained(nli_model_name)
nli_model = AutoModelForSequenceClassification.from_pretrained(nli_model_name)
3. Create an NL2FOL instance

Initialize the translator with your sentence:
sentence = "All politicians are corrupt. John is a politician. Therefore, John is corrupt."

nl2fol = NL2FOL(
    sentence=sentence,
    model_type=model_type,
    pipeline=pipeline,
    tokenizer=tokenizer,
    nli_model=nli_model,
    nli_tokenizer=nli_tokenizer,
    debug=True
)
4. Convert to first-order logic

Run the conversion pipeline:
final_lf, final_lf2 = nl2fol.convert_to_first_order_logic()

print("Logical Form 1:", final_lf)
print("Logical Form 2:", final_lf2)
The output will show:
  • Claim and implication extraction
  • Referring expressions
  • Entity mappings
  • Property relations
  • Final first-order logic formulas

Understanding the Output

When you run the example above with debug=True, you’ll see detailed output including:
Claim: All politicians are corrupt
Implication: John is corrupt
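The exact formulas depend on the LLM's extractions, but the two logical forms correspond to the claim and the implication. The sketch below is illustrative only (the strings are hypothetical, not literal NL2FOL output); it shows the kind of first-order structure to expect for the politician example:

```python
# Hypothetical logical forms for the politician example -- the real strings
# produced by NL2FOL may use different predicate names and syntax.
final_lf = "forall x. (politician(x) -> corrupt(x))"  # claim
final_lf2 = "politician(John) -> corrupt(John)"       # implication

# In a valid syllogism like this one, the implication is an
# instance of the universal claim.
print("Logical Form 1:", final_lf)
print("Logical Form 2:", final_lf2)
```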

Complete Working Example

Here’s a complete script you can run:
nl2fol_basic.py
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from nl_to_fol import NL2FOL
from openai import OpenAI

client = OpenAI()

def basic_example():
    # Setup models
    model_type = 'gpt'
    pipeline = None
    tokenizer = None
    
    nli_model_name = "microsoft/deberta-large-mnli"
    nli_tokenizer = AutoTokenizer.from_pretrained(nli_model_name)
    nli_model = AutoModelForSequenceClassification.from_pretrained(nli_model_name)
    
    # Example sentence with a logical structure
    sentence = "All birds can fly. Penguins are birds. Therefore, penguins can fly."
    
    # Create NL2FOL instance
    nl2fol = NL2FOL(
        sentence=sentence,
        model_type=model_type,
        pipeline=pipeline,
        tokenizer=tokenizer,
        nli_model=nli_model,
        nli_tokenizer=nli_tokenizer,
        debug=True
    )
    
    # Convert to first-order logic
    final_lf, final_lf2 = nl2fol.convert_to_first_order_logic()
    
    print("\n" + "="*50)
    print("FINAL RESULTS")
    print("="*50)
    print(f"Logical Form 1: {final_lf}")
    print(f"Logical Form 2: {final_lf2}")
    
    return final_lf, final_lf2

if __name__ == "__main__":
    basic_example()
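To process several sentences, the script above can be wrapped in a loop, constructing a fresh NL2FOL instance per sentence since the constructor takes the sentence directly. A minimal sketch (`convert` stands in for the per-sentence setup shown above and is stubbed here so the sketch runs without API access):

```python
# Sketch: batch conversion over multiple sentences.
def convert(sentence):
    # In real use: build NL2FOL(sentence=sentence, ...) and call
    # convert_to_first_order_logic(); this stub returns placeholders.
    return (f"LF1({sentence!r})", f"LF2({sentence!r})")

sentences = [
    "All birds can fly. Penguins are birds. Therefore, penguins can fly.",
    "All politicians are corrupt. John is a politician. Therefore, John is corrupt.",
]

results = {s: convert(s) for s in sentences}
for s, (lf1, lf2) in results.items():
    print(s, "->", lf1, "|", lf2)
```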

Key Parameters

  • sentence (string, required): The natural language statement to translate. It should contain a claim-and-implication structure.
  • model_type (string, required): The LLM backend to use. Options: 'gpt' or 'llama'.
  • debug (boolean, default: false): Enable detailed logging of the translation pipeline steps.
  • pipeline (Pipeline, optional): HuggingFace text-generation pipeline (required for Llama models; None for GPT).
  • tokenizer (Tokenizer, optional): HuggingFace tokenizer (required for Llama models; None for GPT).
  • nli_model (Model, required): Natural Language Inference model for entity relation detection.
  • nli_tokenizer (Tokenizer, required): Tokenizer for the NLI model.

Next Steps

Model Backends

Learn how to switch between GPT-4 and Llama models

Custom Datasets

Process your own datasets for fallacy detection

SMT Solving

Use CVC5 to verify logical formulas

API Reference

Explore the complete NL2FOL API

The translation pipeline uses multiple LLM calls to extract claims, referring expressions, properties, and logical forms. With GPT-4, expect the process to take 10-30 seconds per sentence depending on complexity.
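Since each conversion can take 10-30 seconds, it helps to time calls when working through a batch. A minimal sketch (the `timed_convert` helper is our own, not part of NL2FOL; a stub stands in for the real pipeline call):

```python
import time

def timed_convert(convert_fn):
    """Run a conversion callable and report elapsed wall-clock time."""
    start = time.perf_counter()
    result = convert_fn()
    elapsed = time.perf_counter() - start
    return result, elapsed

# With the real pipeline you would pass nl2fol.convert_to_first_order_logic;
# a stub is used here so the sketch runs standalone.
(lf1, lf2), seconds = timed_convert(lambda: ("forall x. P(x)", "P(a)"))
print(f"Conversion took {seconds:.2f}s")
```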
