scan4all supports storing scan results in Elasticsearch for centralized logging, analysis, and long-term retention. This enables powerful querying and visualization of security findings across multiple scans.
Why Use Elasticsearch?
- **Centralized Storage**: store all scan results in a single location for easy access
- **Powerful Queries**: search and filter results using the Elasticsearch query DSL
- **Historical Data**: track vulnerabilities over time and monitor remediation
- **Team Collaboration**: share findings across security teams with centralized access
Quick Start with Docker
The fastest way to get started is using the provided Docker setup:
Step 1: Create Required Directories
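Create the host directories that the `docker run` command in Step 2 mounts into the container (the names mirror its `-v` flags; adjust paths to taste):

```shell
# Directories for Elasticsearch data, logs, and configuration files
mkdir -p data logs config
```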
Step 2: Start Elasticsearch Container
docker run --restart=always \
  --ulimit nofile=65536:65536 \
  -p 9200:9200 \
  -p 9300:9300 \
  -d --name es \
  -v $PWD/logs:/usr/share/elasticsearch/logs \
  -v $PWD/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  -v $PWD/config/jvm.options:/usr/share/elasticsearch/config/jvm.options \
  -v $PWD/data:/usr/share/elasticsearch/data \
  hktalent/elasticsearch:7.16.2
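If you prefer Compose, an equivalent `docker-compose.yml` can be sketched from the same flags (same image, ports, and mounts as the `docker run` command above; this file is an illustration, not one shipped with scan4all):

```yaml
version: "3"
services:
  es:
    image: hktalent/elasticsearch:7.16.2
    restart: always
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - ./logs:/usr/share/elasticsearch/logs
      - ./config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./config/jvm.options:/usr/share/elasticsearch/config/jvm.options
      - ./data:/usr/share/elasticsearch/data
```

Start it with `docker compose up -d`.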
Step 3: Initialize Indices
An initialization script creates the necessary indices for the different scan result types:

- `nmap_index`: port scan results
- `nuclei_index`: Nuclei vulnerability findings
- `httpx_index`: HTTP probe results
- `vscan_index`: vulnerability scan results
Each tool stores results in a separate index for better organization and querying.
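The initialization step boils down to one index-creation request per tool. As a minimal sketch (the endpoint and the absence of custom mappings are assumptions; the actual script ships with scan4all), this loop prints the commands so you can review them, then pipe the output to `sh` against a live instance:

```shell
# Print one index-creation command per scan4all result type.
ES="http://127.0.0.1:9200"
for idx in nmap_index nuclei_index httpx_index vscan_index; do
  echo "curl -X PUT $ES/$idx"
done
```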
scan4all Configuration

Edit config/config.json:
{
  "enableEsSv": true,
  "esUrl": "https://127.0.0.1:8081/%s_index/_doc/%s",
  "esthread": 8
}
| Field | Description | Default |
|---|---|---|
| `enableEsSv` | Enable Elasticsearch storage | `true` |
| `esUrl` | Elasticsearch endpoint URL template | `https://127.0.0.1:8081/%s_index/_doc/%s` |
| `esthread` | Number of worker threads for ES operations | `8` |
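How the two `%s` placeholders in `esUrl` are filled is an assumption here, inferred from the index names above and the `_id` queries later in this page: first the tool name, then the document id (the scan target). The expansion can be illustrated with `printf`:

```shell
# Expand the esUrl template for a nuclei result on target 192.168.0.111.
ES_URL_TEMPLATE="https://127.0.0.1:8081/%s_index/_doc/%s"
URL=$(printf "$ES_URL_TEMPLATE" "nuclei" "192.168.0.111")
echo "$URL"
# → https://127.0.0.1:8081/nuclei_index/_doc/192.168.0.111
```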
Ensure the Elasticsearch URL is accessible from where scan4all runs. Adjust the IP address if running on different hosts.
Elasticsearch Configuration
The main configuration file is config/elasticsearch.yml:
# Cluster settings
cluster.name: my-application
node.name: node-1

# Data and log paths
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs

# Network settings
network.host: 0.0.0.0
transport.host: 0.0.0.0
network.publish_host: 192.168.0.107

# Ports
http.port: 9200
transport.tcp.port: 9300

# CORS (for web access)
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: Authorization, X-Requested-With, X-Auth-Token, Content-Type, Content-Length

# Performance settings
http.max_content_length: 400mb
indices.query.bool.max_clause_count: 20000
cluster.routing.allocation.disk.threshold_enabled: false
Key Configuration Options
- `network.host`: bind address (0.0.0.0 = all interfaces)
- `network.publish_host`: IP address to advertise to other nodes
- `http.port`: REST API port (default: 9200)
- `transport.tcp.port`: node-to-node transport port (default: 9300)
Required for web-based Elasticsearch clients:

http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: Authorization, X-Requested-With, X-Auth-Token, Content-Type, Content-Length
Nuclei-Specific Configuration
For Nuclei integration, use config/nuclei_esConfig.yaml:
elasticsearch:
  # IP of the elasticsearch instance
  ip: 127.0.0.1
  # Port of the elasticsearch instance
  port: 9200
  # IndexName is the name of the elasticsearch index
  index-name: nuclei_index
  # SSL enables ssl for the elasticsearch connection
  ssl: false
  # SSLVerification controls SSL certificate verification
  ssl-verification: false
  # Username for the elasticsearch instance
  username: elastic
  # Password for the elasticsearch instance
  password: testnmanp
The default password testnmanp should be changed in production environments.
Querying Results
Search by Target
Query results for a specific target:
curl "http://127.0.0.1:9200/nmap_index/_search?q=_id:192.168.0.111"
Replace 192.168.0.111 with your target IP or hostname.
Search by Field

You can also filter on any indexed field. For example, to find hosts with port 22 open:
curl -X GET "http://127.0.0.1:9200/nmap_index/_search" \
-H 'Content-Type: application/json' \
-d '{
"query": {
"match": {
"port": 22
}
}
}'
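Other field queries follow the same shape. For example, a date-range query on the `@timestamp` field (the field name used elsewhere in this page) restricts results to the last seven days; send it as the request body to the same `_search` endpoint:

```json
{
  "query": {
    "range": {
      "@timestamp": { "gte": "now-7d" }
    }
  }
}
```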
Advanced Queries
Find All Critical Vulnerabilities
{
  "query": {
    "bool": {
      "must": [
        { "match": { "severity": "critical" } },
        { "exists": { "field": "vulnerability" } }
      ]
    }
  },
  "sort": [
    { "@timestamp": { "order": "desc" } }
  ]
}
Count Findings per Host

{
  "size": 0,
  "aggs": {
    "by_host": {
      "terms": {
        "field": "host.keyword",
        "size": 100
      },
      "aggs": {
        "vulnerabilities": {
          "value_count": {
            "field": "vulnerability.keyword"
          }
        }
      }
    }
  }
}
Index Management
List All Indices
curl "http://127.0.0.1:9200/_cat/indices?v"
View Index Mapping
curl "http://127.0.0.1:9200/nuclei_index/_mapping?pretty"
Delete Old Results
# Delete all documents older than 30 days
curl -X POST "http://127.0.0.1:9200/nuclei_index/_delete_by_query" \
-H 'Content-Type: application/json' \
-d '{
"query": {
"range": {
"@timestamp": {
"lte": "now-30d"
}
}
}
}'
Create Index Alias
curl -X POST "http://127.0.0.1:9200/_aliases" \
-H 'Content-Type: application/json' \
-d '{
"actions": [
{
"add": {
"index": "nuclei_index",
"alias": "vulnerabilities"
}
}
]
}'
Cluster Configuration
For production deployments with multiple nodes:
# Node 1
cluster.name: scan4all-cluster
node.name: node-1
network.publish_host: 192.168.0.107
discovery.seed_hosts: ["192.168.0.107:9300", "192.168.0.108:9300", "192.168.0.109:9300"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]

# Node 2
cluster.name: scan4all-cluster
node.name: node-2
network.publish_host: 192.168.0.108
discovery.seed_hosts: ["192.168.0.107:9300", "192.168.0.108:9300", "192.168.0.109:9300"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
Ensure all nodes can communicate on port 9300 for cluster transport.
Adjust Worker Threads
Increase the esthread value in config/config.json for faster result ingestion.
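For example, doubling the default of 8 (the value 16 is illustrative; tune it against your cluster):

```json
{
  "esthread": 16
}
```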
Too many threads can overwhelm Elasticsearch. Monitor cluster health and adjust accordingly.
JVM Heap Size
Edit config/jvm.options:
# Set heap size (recommended: 50% of available RAM, max 32GB)
-Xms4g
-Xmx4g
Index Settings
Optimize for write performance:
curl -X PUT "http://127.0.0.1:9200/nuclei_index/_settings" \
-H 'Content-Type: application/json' \
-d '{
"index": {
"refresh_interval": "30s",
"number_of_replicas": 0
}
}'
Troubleshooting
Connection refused? Check that the container is running and the port is reachable:

docker ps | grep es
curl http://127.0.0.1:9200

Also verify that firewall rules allow port 9200.
Out-of-memory errors? Increase the JVM heap size in config/jvm.options and restart Elasticsearch after the change.
Disk filling up? Enable automatic index cleanup. This assumes date-suffixed indices and GNU date; since the wildcard matches only one day's suffix, schedule it daily (e.g. via cron):

# Delete indices dated 90 days ago
curl -X DELETE "http://127.0.0.1:9200/*-$(date -d '90 days ago' +%Y.%m.%d)"
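The wildcard delete above only works if your indices carry a `YYYY.MM.DD` suffix (e.g. `nuclei_index-2024.01.31`, a naming convention assumed here, not one scan4all enforces). You can check the computed cutoff suffix locally before wiring this into cron:

```shell
# Compute the date suffix targeted by the wildcard delete (GNU date).
CUTOFF=$(date -d '90 days ago' +%Y.%m.%d)
echo "$CUTOFF"
```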
Slow or failing queries on large datasets? Try the following:

- Reduce indices.query.bool.max_clause_count
- Add more nodes to the cluster
- Increase the shard count for large indices
- Use index aliases and time-based indices
Visualization with Kibana
While not included in scan4all, you can use Kibana for visualization:
docker run -d \
--name kibana \
--link es:elasticsearch \
-p 5601:5601 \
docker.elastic.co/kibana/kibana:7.16.2
Access Kibana at http://localhost:5601 and create dashboards for your scan results.
Security Best Practices
- **Change Default Password**: update the Elasticsearch password in nuclei_esConfig.yaml
- **Enable SSL/TLS**: configure SSL for encrypted communication
- **Network Isolation**: run Elasticsearch on a private network
- **Access Control**: use firewall rules to restrict access to port 9200