```json
{
  "timestamp": "2026-03-09T15:30:45.123456",
  "error": "nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)",
  "diagnosis": "Port 80 is occupied by another process. Need to identify and stop the conflicting process.",
  "command": "sudo fuser -k 80/tcp && sudo service nginx start",
  "result": "nginx started successfully",
  "success": true
}
```
The plan node uses memory to avoid repeating failed commands. From src/core/memory.py:57:
src/core/memory.py

```python
def get_failed_commands(self, error: str) -> List[str]:
    """Return commands from past failed episodes whose error message
    shares at least one keyword with the current error."""
    error_lower = error.lower()
    error_keywords = set(error_lower.split())
    failed = set()
    for ep in self.episodes:
        if not ep["success"]:
            ep_keywords = set(ep.get("error", "").lower().split())
            if len(error_keywords & ep_keywords) >= 1:
                cmd = ep["command"].strip()
                failed.add(cmd)
    return list(failed)
```
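As a quick sanity check of the overlap rule, here is a minimal harness; the `Memory` stand-in and the sample episode are hypothetical, mirroring the episode format shown above rather than the project's real class:

```python
from typing import List

class Memory:
    """Minimal in-memory stand-in for src/core/memory.py (hypothetical)."""
    def __init__(self):
        self.episodes = []

    def get_failed_commands(self, error: str) -> List[str]:
        error_keywords = set(error.lower().split())
        failed = set()
        for ep in self.episodes:
            if not ep["success"]:
                ep_keywords = set(ep.get("error", "").lower().split())
                if len(error_keywords & ep_keywords) >= 1:
                    failed.add(ep["command"].strip())
        return list(failed)

memory = Memory()
memory.episodes.append({
    "error": "bind() to 0.0.0.0:80 failed",
    "command": "sudo systemctl restart nginx",
    "success": False,
})

# "bind()" and "failed" overlap with the stored error, so the command is excluded
print(memory.get_failed_commands("nginx bind() failed on port 80"))
# → ['sudo systemctl restart nginx']
```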
Used in the plan node (src/agent/nodes/plan.py:37):
src/agent/nodes/plan.py

```python
failed_commands = memory.get_failed_commands(error)
if failed_commands:
    constraints += "\n\nCOMMANDS THAT FAILED BEFORE (do NOT use these):\n"
    for cmd in failed_commands:
        constraints += f"- {cmd}\n"
```
```python
def get_summary(self) -> str:
    total = len(self.episodes)
    successes = sum(1 for ep in self.episodes if ep["success"])
    failures = total - successes
    return f"Total: {total} episodes, {successes} successful, {failures} failed"
```
```python
from src.core.memory import memory
from src.core.knowledge import kb

# 1. Check memory first
similar = memory.find_similar(error)
if similar and similar["success"]:
    # Reuse the command that worked for a similar error
    command = similar["command"]
    diagnosis = f"Solution from memory: {command}"
else:
    # No successful match: fall back to RAG over the knowledge base
    diagnosis = kb.query(f"How to fix: {error}")
    # Avoid commands that already failed for similar errors
    failed_commands = memory.get_failed_commands(error)
    # ... generate new command avoiding failed_commands

# 2. Execute the command over SSH (returns exit code and output)
result = ssh.execute_command(command)

# 3. Save the episode so future runs can learn from it
memory.save_episode(
    error=error,
    diagnosis=diagnosis,
    command=command,
    result=result[1],
    success=(result[0] == 0),
)
```
Memory matching in `find_similar` uses keyword overlap and requires at least 2 common keywords for a match, which prevents false positives while still capturing genuinely similar errors. (`get_failed_commands` is deliberately looser, flagging any past failure that shares even a single keyword.)
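`find_similar` itself is not shown above; a minimal sketch consistent with that two-keyword threshold might look like the following. The function body is an assumption for illustration, not the project's actual implementation:

```python
from typing import Any, Dict, List, Optional

def find_similar(episodes: List[Dict[str, Any]], error: str) -> Optional[Dict[str, Any]]:
    """Hypothetical sketch: return the stored episode that shares the most
    keywords with `error`, requiring at least 2 common keywords."""
    error_keywords = set(error.lower().split())
    best, best_overlap = None, 0
    for ep in episodes:
        ep_keywords = set(ep.get("error", "").lower().split())
        overlap = len(error_keywords & ep_keywords)
        if overlap >= 2 and overlap > best_overlap:
            best, best_overlap = ep, overlap
    return best

episodes = [
    {"error": "bind() to 0.0.0.0:80 failed", "command": "sudo fuser -k 80/tcp", "success": True},
]
# "bind()" and "failed" overlap (2 keywords), so the episode matches
match = find_similar(episodes, "nginx bind() failed address already in use")
print(match["command"] if match else None)
# → sudo fuser -k 80/tcp
```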
Because every lookup scans the episode list linearly, similarity search slows down as the memory file grows into thousands of episodes. For production deployments, consider periodically archiving old episodes.
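A periodic archiving pass could be as simple as the sketch below. The file layout and the `archive_old_episodes` name are assumptions; adapt it to the actual storage format:

```python
import json
from pathlib import Path

def archive_old_episodes(memory_path: str, keep: int = 500) -> None:
    """Move all but the newest `keep` episodes into a side archive file,
    assuming the memory file holds a JSON list of episode dicts."""
    path = Path(memory_path)
    episodes = json.loads(path.read_text())
    if len(episodes) <= keep:
        return  # nothing to archive yet
    old, recent = episodes[:-keep], episodes[-keep:]
    archive = path.with_suffix(".archive.json")
    # Append older episodes to any existing archive, then shrink the live file
    backlog = json.loads(archive.read_text()) if archive.exists() else []
    archive.write_text(json.dumps(backlog + old, indent=2))
    path.write_text(json.dumps(recent, indent=2))
```

Running this from a cron job keeps the live file bounded while preserving the full history on disk.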