The bash::framehead testing framework provides automated verification of framework functions, with per-test reporting and coverage tracking of which functions remain untested.
Running tests
Execute the test suite on a compiled framework file:
./main.sh test ./compiled.sh
Tests run automatically and produce colored output:
=== bash::framehead functional smoke tests ===
--- string ---
PASS string::upper
PASS string::lower
PASS string::length
PASS string::contains (true)
PASS string::contains (false)
FAIL string::reverse
expected: olleh
actual: hello
--- array ---
PASS array::length
PASS array::first
SKIP array::experimental (not implemented)
=== Results: 659 passed, 1 failed, 8 skipped ===
=== Success rate: 99.8% (659/660) ===
Test helper functions
The tester() function in main.sh:146-1400+ provides three primary test helpers:
_test - Exact match testing
Compare actual output to expected output:
_test "test name" "expected" "actual"
Examples from main.sh:241-276:
_test "string::upper" "HELLO" "$(string::upper hello)"
_test "string::lower" "hello" "$(string::lower HELLO)"
_test "string::length" "5" "$(string::length hello)"
_test "string::contains (true)" "0" "$(string::contains hello ell; echo $?)"
_test "string::contains (false)" "1" "$(string::contains hello xyz; echo $?)"
_test "string::reverse" "olleh" "$(string::reverse hello)"
When testing exit codes, capture them with echo $? in a subshell as shown above.
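The body of _test itself is not reproduced in this section; a minimal sketch consistent with the output format shown above (counter and color variable names are assumptions, not copied from main.sh) could look like:

```shell
# Hypothetical sketch of _test; the real implementation is in main.sh.
passed=0 failed=0
PASS="PASS" FAIL="FAIL"   # colorized when stdout is a terminal

_test() {
  local name="$1" expected="$2" actual="$3"
  if [[ "$actual" == "$expected" ]]; then
    echo -e "$PASS $name"
    (( passed++ ))
  else
    echo -e "$FAIL $name"
    echo "expected: $expected"
    echo "actual: $actual"
    (( failed++ ))
  fi
}
```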
_test_contains - Substring testing
Verify output contains a specific substring:
_test_contains "test name" "needle" "actual output"
Examples from main.sh:191-205:
_test_contains () {
  local name="$1" needle="$2" actual="$3"
  local clean=true
  _check_stderr || clean=false
  if [[ "$actual" == *"$needle"* ]] && $clean; then
    echo -e "$PASS $name"
    (( passed++ ))
  else
    echo -e "$FAIL $name"
    [[ "$actual" != *"$needle"* ]] && \
      echo "expected to contain: $needle" && \
      echo "actual: $actual"
    (( failed++ ))
  fi
}
Usage examples:
_test_contains "math::bc" "3.14" "$(math::bc "22/7" 2)"
_test_contains "fs::append" "appended" "$(cat "$_tmp_file")"
_test_contains "timedate::duration::relative" "ago" "$(timedate::duration::relative $(( $(timedate::timestamp::unix) - 3600 )))"
_test_nonempty - Non-empty output
Verify function produces any output:
_test_nonempty "test name" "actual output"
Examples from main.sh:296-305:
_test_nonempty "string::uuid" "$(string::uuid)"
_test_nonempty "string::random" "$(string::random 8)"
_test_nonempty "string::md5" "$(string::md5 hello)"
_test_nonempty "string::sha256" "$(string::sha256 hello)"
_test_nonempty "string::url_encode" "$(string::url_encode "hello world")"
_test_nonempty "string::base64_encode" "$(string::base64_encode hello)"
This is useful for:
Functions with platform-dependent output
UUID/hash generators
System introspection functions
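The helper's body is not shown in this section; a hedged sketch consistent with the other helpers (variable names assumed, not copied from main.sh) might be:

```shell
# Hypothetical sketch of _test_nonempty; the real implementation is in main.sh.
_test_nonempty() {
  local name="$1" actual="$2"
  if [[ -n "$actual" ]]; then
    echo -e "${PASS:-PASS} $name"
    (( passed++ ))
  else
    echo -e "${FAIL:-FAIL} $name"
    echo "expected: non-empty output"
    (( failed++ ))
  fi
}
```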
_test_skip - Skip tests conditionally
Skip tests that require unavailable dependencies:
_test_skip "reason for skipping"
Examples from main.sh:495-547:
if math::has_bc; then
  _test_contains "math::bc" "3.14" "$(math::bc "22/7" 2)"
  _test "math::floor" "3" "$(math::floor 3.7)"
  _test "math::ceil" "4" "$(math::ceil 3.2)"
  _mark_tested math::bc math::floor math::ceil
else
  _test_skip "math::bc (bc not available)"
  _mark_tested math::bc math::floor math::ceil
fi
Always call _mark_tested for skipped functions to prevent them from appearing in the “untested functions” report.
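A minimal sketch of what _test_skip likely does (the counter and color variable names are assumptions; the real implementation is in main.sh):

```shell
# Hypothetical sketch of _test_skip; the real implementation is in main.sh.
skipped=0
SKIP="SKIP"   # colorized when stdout is a terminal

_test_skip() {
  echo -e "$SKIP $1"
  (( skipped++ ))
}
```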
Test organization
Tests are organized by module with clear section markers:
echo ""
echo "--- string ---"
_test "string::upper" "HELLO" "$(string::upper hello)"
_test "string::lower" "hello" "$(string::lower HELLO)"
_mark_tested string::upper string::lower
echo ""
echo "--- array ---"
_test "array::length" "3" "$(array::length a b c)"
_test "array::first" "a" "$(array::first a b c)"
_mark_tested array::length array::first
Marking tested functions
Track which functions have been tested to identify gaps in coverage:
_mark_tested function1 function2 function3
Example from main.sh:362-398:
_mark_tested string::upper string::lower string::length string::contains string::reverse \
string::snake_to_camel string::trim string::trim_left string::trim_right \
string::repeat string::is_integer string::is_float string::is_hex \
string::is_bin string::is_octal string::is_numeric string::is_alnum \
string::is_alpha string::is_empty string::is_not_empty \
string::starts_with string::ends_with string::matches
At the end of the test run, any functions not marked will be reported:
=== Untested functions ===
string::experimental_feature
math::advanced_operation
=== Results: 659 passed, 0 failed, 8 skipped, 2 untested ===
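One way such a report can be produced (a sketch of the idea, not necessarily main.sh's actual mechanism) is to record marked names in an associative array and compare them against every `::`-namespaced function defined in the shell:

```shell
# Hypothetical coverage tracker; main.sh's actual bookkeeping may differ.
declare -A _tested

_mark_tested() {
  local fn
  for fn in "$@"; do
    _tested["$fn"]=1
  done
}

_report_untested() {
  local fn
  # compgen -A function lists every function defined in the current shell
  for fn in $(compgen -A function); do
    [[ "$fn" == *::* && -z "${_tested[$fn]:-}" ]] && echo "$fn"
  done
  return 0
}
```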
Testing patterns
Testing boolean functions
Capture exit codes in subshells:
# True case
_test "string::is_integer (true)" "0" "$(string::is_integer 42; echo $?)"
# False case
_test "string::is_integer (false)" "1" "$(string::is_integer abc; echo $?)"
Testing with temporary files
Create cleanup-safe temp resources:
local _tmp_dir _tmp_file
_tmp_dir=$(mktemp -d)
_tmp_file=$(mktemp "$_tmp_dir/test.XXXXXX")
echo "hello world" > "$_tmp_file"
_test "fs::exists (true)" "0" "$(fs::exists "$_tmp_file"; echo $?)"
_test "fs::read" "hello world" "$(fs::read "$_tmp_file")"
# Cleanup
rm -rf "$_tmp_dir"
From main.sh:698-816:
local _tmp_dir _tmp_file
_tmp_dir=$(mktemp -d)
_tmp_file=$(mktemp "$_tmp_dir/test.XXXXXX")
echo "hello world" > "$_tmp_file"
local _tmp_link="${_tmp_dir}/link.txt"
ln -s "$_tmp_file" "$_tmp_link"
_test "fs::exists (true)" "0" "$(fs::exists "$_tmp_file"; echo $?)"
_test "fs::is_file (true)" "0" "$(fs::is_file "$_tmp_file"; echo $?)"
_test "fs::is_symlink (true)" "0" "$(fs::is_symlink "$_tmp_link"; echo $?)"
# cleanup
rm -rf "$_tmp_dir"
Testing environment-dependent functions
Handle functions that may succeed or fail depending on the environment:
_test "runtime::is_container" "0" \
  "$(runtime::is_container; r=$?; [[ $r -eq 0 || $r -eq 1 ]] && echo 0 || echo 1)"
This pattern:
Captures the exit code in r
Accepts both 0 (true) and 1 (false) as valid
Returns 0 for pass, 1 for fail (unexpected exit codes)
From main.sh:637-678:
_test "runtime::is_container" "0" \
  "$(runtime::is_container; r=$?; [[ $r -eq 0 || $r -eq 1 ]] && echo 0 || echo 1)"
_test "runtime::supports_color" "0" \
  "$(runtime::supports_color; r=$?; [[ $r -eq 0 || $r -eq 1 ]] && echo 0 || echo 1)"
_test "runtime::is_wsl" "0" \
  "$(runtime::is_wsl; r=$?; [[ $r -eq 0 || $r -eq 1 ]] && echo 0 || echo 1)"
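Because this pattern repeats verbatim, it could be factored into a small wrapper. _test_boolean below is a hypothetical helper for illustration, not part of main.sh:

```shell
# Hypothetical wrapper: pass when a command exits 0 or 1, fail on anything else.
_test_boolean() {
  local name="$1"; shift
  "$@"
  local r=$?
  if [[ $r -eq 0 || $r -eq 1 ]]; then
    echo "PASS $name"
  else
    echo "FAIL $name (unexpected exit code $r)"
  fi
}
```

Usage would then shrink each case to one line, e.g. _test_boolean "runtime::is_container" runtime::is_container.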
Testing with background processes
local _bg_pid
sleep 5 &
_bg_pid=$!
_test "process::kill::graceful" "0" "$(process::kill::graceful $_bg_pid 2; echo $?)"
sleep 1 &
_bg_pid=$!
_test "process::signal" "0" "$(process::signal $_bg_pid TERM; echo $?)"
From main.sh:978-991:
local _bg_pid
sleep 5 &
_bg_pid=$!
_test "process::kill::graceful" "0" "$(process::kill::graceful $_bg_pid 2; echo $?)"
sleep 1 &
_bg_pid=$!
_test "process::signal" "0" "$(process::signal $_bg_pid TERM; echo $?)"
sleep 1 &
_bg_pid=$!
process::suspend "$_bg_pid"
_test "process::suspend" "0" "$(process::is_running "$_bg_pid"; echo $?)"
process::resume "$_bg_pid"
process::kill "$_bg_pid" 2>/dev/null
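One general shell detail worth knowing with this pattern (not specific to main.sh): reaping a signaled background process with wait collects its exit status, which bash reports as 128 plus the signal number:

```shell
# General pattern: reap a signaled background process to collect its status.
sleep 5 &
pid=$!

kill -TERM "$pid"
wait "$pid"
status=$?   # 128 + 15 (SIGTERM) = 143 in bash

echo "child exited with status $status"
```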
Stderr checking
The test framework automatically checks for unexpected stderr output:
From main.sh:150-163:
local _stderr_log
_stderr_log=$(mktemp)
exec 3>&2 2>"$_stderr_log"

_check_stderr () {
  local err
  err=$(cat "$_stderr_log")
  > "$_stderr_log"
  if [[ -n "$err" ]]; then
    echo "stderr: $err"
    return 1
  fi
  return 0
}
If a function writes to stderr unexpectedly, the test fails even if output matches.
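The excerpt above opens the redirection but does not show it being undone; a self-contained sketch of the full capture/check/restore cycle (the restore step at the end is an assumption about what main.sh does when the run finishes):

```shell
# Demo of the capture/check/restore cycle; fd 3 holds the original stderr.
_stderr_log=$(mktemp)
exec 3>&2 2>"$_stderr_log"

_check_stderr() {
  local err
  err=$(cat "$_stderr_log")
  > "$_stderr_log"          # truncate the log for the next check
  if [[ -n "$err" ]]; then
    echo "stderr: $err"
    return 1
  fi
  return 0
}

echo "stray warning" >&2     # simulate a function leaking to stderr
_check_stderr || caught=yes

exec 2>&3 3>&-               # assumed restore step when the run finishes
rm -f "$_stderr_log"
```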
Increasing test coverage with LLMs
The test suite includes guidance for using LLMs to expand coverage:
From main.sh:123-144:
## These are covered by LLMs
## You may not want to update this manually
## Unless you want to painstakingly go back and forth the files and recompile to fix test coverages.
##
## Recommended prompt:
## """
## Based on the following tester function:
## (copy this function)
## Can you update the function to maximise test coverage? Here is the output:
## (Insert test output ESPECIALLY the 'untested functions' section)
##
## Please do not change the structure of the tests, just add new ones.
## If you insist, you SHOULD ask first.
## """
##
## Upload the compiled single-file output (bash-framehead.sh) for full context.
## If the file is too large, upload individual module files one at a time.
##
## REMEMBER: YOU are still responsible. DO NOT leave LLMs fully agentic.
When using LLMs for test generation:
Always review generated tests manually
Verify tests actually exercise the intended functionality
Don’t blindly accept AI-generated code
Colored output
Tests produce color-coded output when run in a terminal:
From main.sh:165-173:
local PASS FAIL SKIP
if [[ -t 1 ]]; then
  PASS="\033[32mPASS\033[0m"
  FAIL="\033[31mFAIL\033[0m"
  SKIP="\033[33mSKIP\033[0m"
else
  PASS="PASS"; FAIL="FAIL"; SKIP="SKIP"
fi
Green PASS for successful tests
Red FAIL for failures with expected/actual diff
Yellow SKIP for conditionally skipped tests
Writing tests for new modules
When adding a new module, follow this checklist:
Add test section
Create a new section in the tester() function:
echo ""
echo "--- yourmodule ---"
Test core functionality
Cover the main use cases:
_test "yourmodule::basic" "expected" "$(yourmodule::basic input)"
Test edge cases
Include boundary conditions:
_test "yourmodule::empty" "" "$(yourmodule::process '')"
_test "yourmodule::large" "0" "$(yourmodule::handle_large 999999; echo $?)"
Test error handling
Verify graceful failures:
_test "yourmodule::invalid" "1" "$(yourmodule::validate bad_input; echo $?)"
Mark all functions tested
_mark_tested yourmodule::basic yourmodule::process yourmodule::validate
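Putting the checklist together, a complete hypothetical section could look like the following. The yourmodule functions and the minimal stand-ins for _test and _mark_tested are illustrative only, included so the snippet runs outside main.sh; inside the real tester() the framework's own helpers are used:

```shell
# Minimal stand-ins so this snippet runs outside main.sh:
_test() { [[ "$3" == "$2" ]] && echo "PASS $1" || echo "FAIL $1"; }
_mark_tested() { :; }

# Illustrative stubs for the hypothetical module under test:
yourmodule::basic()    { echo "expected"; }
yourmodule::process()  { printf '%s' "$1"; }
yourmodule::validate() { [[ "$1" != "bad_input" ]]; }

echo ""
echo "--- yourmodule ---"
_test "yourmodule::basic"   "expected" "$(yourmodule::basic input)"
_test "yourmodule::empty"   ""         "$(yourmodule::process '')"
_test "yourmodule::invalid" "1"        "$(yourmodule::validate bad_input; echo $?)"
_mark_tested yourmodule::basic yourmodule::process yourmodule::validate
```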
Next steps
Adding modules: create new modules for the framework.
Contributing: submit your improvements to the project.