# File System Tools

File system tools enable agents to interact with files within their sandboxed environments. All operations are scoped to the session directory for security.
## Core File Operations

### view_file

Read the content of a file within the session sandbox.

**Parameters:**

- `path` (string, required): Path to the file, relative to the session root.
- `max_size` (integer, default: `10485760`): Maximum file size in bytes (default: 10 MB).
```python
result = view_file(
    path="data/config.json",
    workspace_id="ws_123",
    agent_id="agent_456",
    session_id="session_789"
)

if result["success"]:
    content = result["content"]
    print(f"File size: {result['size_bytes']} bytes")
    print(f"Lines: {result['lines']}")
```
### write_to_file

Create a new file or append content to an existing file.

**Parameters:**

- `path` (string, required): Path to the file, relative to the session root.
- `content` (string, required): Content to write to the file.
- `append` (boolean, optional): Whether to append to the file instead of overwriting.
```python
result = write_to_file(
    path="output/results.json",
    content=json.dumps(data, indent=2),
    workspace_id="ws_123",
    agent_id="agent_456",
    session_id="session_789"
)
```
### list_dir

List directory contents with file metadata.
```python
result = list_dir(
    path="data",
    workspace_id="ws_123",
    agent_id="agent_456",
    session_id="session_789"
)

for item in result["items"]:
    print(f"{item['name']}: {item['type']} ({item['size']} bytes)")
```
## Search Operations

### grep_search

Search file contents using regex patterns.

**Parameters:**

- `pattern` (string, required): Regex pattern to search for.
- `path` (string, required): Directory or file to search.
```python
result = grep_search(
    pattern="error|warning",
    path="logs/",
    workspace_id="ws_123",
    agent_id="agent_456",
    session_id="session_789"
)

for match in result["matches"]:
    print(f"{match['file']}:{match['line']}: {match['text']}")
```
## File Modification

### replace_file_content

Replace text in a file using exact string substitution.
```python
result = replace_file_content(
    path="config.yaml",
    old_text="debug: false",
    new_text="debug: true",
    workspace_id="ws_123",
    agent_id="agent_456",
    session_id="session_789"
)
```
### apply_diff

Apply a diff patch to a file.
```python
diff_content = '''\
--- a/file.txt
+++ b/file.txt
@@ -1,3 +1,3 @@
 line 1
-line 2
+line 2 modified
 line 3
'''

result = apply_diff(
    path="file.txt",
    diff=diff_content,
    workspace_id="ws_123",
    agent_id="agent_456",
    session_id="session_789"
)
```

Note that in unified diff format, unchanged context lines must begin with a single leading space.
### apply_patch

Apply a unified patch file.
```python
result = apply_patch(
    path="src/",
    patch_content=patch_data,
    workspace_id="ws_123",
    agent_id="agent_456",
    session_id="session_789"
)
```
## Command Execution

Execute shell commands within the sandbox.

> **Warning:** This tool executes arbitrary shell commands. Use with caution and proper input validation.
```python
result = execute_command_tool(
    command="ls -la",
    workspace_id="ws_123",
    agent_id="agent_456",
    session_id="session_789",
    timeout=30
)

if result["success"]:
    print(result["stdout"])
    print(f"Exit code: {result['exit_code']}")
```
## Data Management

### save_data

Save large data to disk for later retrieval.

**Parameters:**

- `filename` (string, required): Simple filename like `results.json` (no paths).
- `data` (string, required): String data to write (typically JSON).
- `data_dir` (string, required): Absolute path to the data directory.
```python
import json

data = {"users": [...], "total": 1000}

result = save_data(
    filename="users.json",
    data=json.dumps(data, indent=2),
    data_dir="/workspace/data"
)

print(f"Saved {result['size_bytes']} bytes to {result['filename']}")
```
### load_data

Load previously saved data with pagination.

**Parameters:**

- `filename` (string, required): Name of the previously saved file.
- `data_dir` (string, required): Absolute path to the data directory.
- `offset` (integer, optional): Byte offset to start reading from.
- `limit` (integer, optional): Maximum bytes to return (default: 10 KB).
```python
result = load_data(
    filename="users.json",
    data_dir="/workspace/data"
)

data = json.loads(result["content"])
print(f"Loaded {result['bytes_read']} bytes")

if result["has_more"]:
    next_offset = result["next_offset_bytes"]
```
### list_data_files

List all data files in the session.
```python
result = list_data_files(data_dir="/workspace/data")

for file in result["files"]:
    print(f"{file['name']}: {file['size']} bytes")
```
## CSV Operations

### csv_read

Read rows from a CSV file.
```python
result = csv_read(
    path="data/users.csv",
    limit=100,
    offset=0
)

for row in result["rows"]:
    print(row)
```
### csv_write

Write a new CSV file.
```python
headers = ["name", "email", "age"]
rows = [
    ["Alice", "alice@example.com", 30],
    ["Bob", "bob@example.com", 25]
]

result = csv_write(
    path="output/users.csv",
    headers=headers,
    rows=rows
)
```
### csv_append

Append rows to an existing CSV file.
```python
new_rows = [
    ["Charlie", "charlie@example.com", 35]
]

result = csv_append(
    path="output/users.csv",
    rows=new_rows
)
```
### csv_sql

Query CSV files using SQL (DuckDB).
```python
result = csv_sql(
    path="data/sales.csv",
    query="SELECT region, SUM(amount) AS total FROM sales GROUP BY region"
)

for row in result["rows"]:
    print(f"{row['region']}: ${row['total']}")
```

Aliasing aggregate expressions (`AS total`) gives the result column a predictable key.
## Excel Operations

### excel_read

Read rows from an Excel sheet.
```python
result = excel_read(
    path="data/report.xlsx",
    sheet="Sales",
    limit=100
)
```
### excel_write

Write a new Excel file.
```python
data = {
    "Sheet1": {
        "headers": ["Name", "Value"],
        "rows": [["A", 1], ["B", 2]]
    }
}

result = excel_write(
    path="output/report.xlsx",
    data=data
)
```
### excel_sql

Query Excel sheets using SQL (DuckDB).
```python
result = excel_sql(
    path="data/report.xlsx",
    query="SELECT * FROM Sheet1 WHERE Value > 10"
)
```
## PDF Operations

### pdf_read

Extract text from PDF files.
```python
result = pdf_read(path="documents/report.pdf")

for page in result["pages"]:
    print(f"Page {page['number']}: {page['text'][:100]}...")

print(f"Total pages: {result['total_pages']}")
```
## Security

All file operations are sandboxed to the session directory. Path traversal attempts are blocked.

### Path Security
```python
from aden_tools.tools.file_system_toolkits.security import get_secure_path

# Safe path resolution
secure_path = get_secure_path(
    path="data/file.txt",
    workspace_id="ws_123",
    agent_id="agent_456",
    session_id="session_789"
)
# Returns: /workspace/ws_123/agent_456/session_789/data/file.txt

# Blocked attempts
get_secure_path("../../etc/passwd", ...)  # Blocked
get_secure_path("/etc/passwd", ...)       # Blocked
```
## Best Practices

- **Always read before modifying.** Use `view_file` to inspect content before applying changes with `replace_file_content` or `apply_diff`.
- **Use data tools for large results.** When tool results exceed context limits, use `save_data` to write to disk and `load_data` to retrieve with pagination.
- **Validate paths.** Always validate that paths fall within the expected directory structure before operating on them.
- **Set the encoding.** Specify the correct encoding parameter when reading non-UTF-8 files.