The --worker flag
The --worker N/M flag tells BTCRecover that this instance should process slice N out of M equal partitions. Every machine in the group runs the same command — only the N value changes.
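For instance, with three machines the launch commands differ only in N. A sketch that prints each machine's command (the wallet and tokenlist file names are hypothetical):

```shell
# Print the launch command for each of M=3 workers; only N varies per machine.
M=3
for N in $(seq 1 "$M"); do
  echo "python btcrecover.py --wallet wallet.dat --tokenlist tokens.txt --worker $N/$M"
done
```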
The --threads flag
By default BTCRecover uses as many threads as there are logical CPU cores. Use --threads to override this, which is particularly useful when running multiple GPU workers.
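A sketch of picking a thread count programmatically, capping at the Windows limit described below (nproc is a Linux assumption; on macOS use `sysctl -n hw.ncpu`; file names are hypothetical):

```shell
# Use all logical cores, but never more than 64 (the Windows thread limit).
CORES=$(nproc)
THREADS=$(( CORES > 64 ? 64 : CORES ))
echo "python btcrecover.py --wallet wallet.dat --tokenlist tokens.txt --threads $THREADS"
```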
On Windows, BTCRecover is limited to 64 threads. If your machine has more than 64 logical cores, explicitly set --threads to 64 or below.
Saving and resuming progress
Use --autosave to write periodic checkpoints to a file. If a worker is interrupted, restart it with --restore to pick up where it left off.
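A minimal sketch of the checkpoint cycle, assuming hypothetical wallet, tokenlist, and save-file names:

```shell
# Write a checkpoint periodically while running:
python btcrecover.py --wallet wallet.dat --tokenlist tokens.txt --autosave progress.sav
# After an interruption, resume; progress and options are read back from the file:
python btcrecover.py --restore progress.sav
```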
Setting up a multi-machine recovery
Extract wallet data
Generate a wallet extract on your local machine so you can share only the minimum required data with remote workers; no private keys or addresses are exposed. The output is a short base64 string that you pass to each worker via --data-extract-string.
Build and test your tokenlist locally
Run the recovery command on your local machine first to count passwords, catch tokenlist errors, and estimate total runtime before committing to rented hardware.
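As a quick sanity check, --listpass prints the candidate passwords instead of testing them, so piping through wc -l gives the total count. A sketch with a hypothetical tokenlist name:

```shell
# Count candidate passwords before committing to rented hardware:
python btcrecover.py --tokenlist tokens.txt --listpass | wc -l
```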
Provision remote instances
For cloud GPU instances (e.g. Vast.ai), use the dceoy/hashcat Docker image and an on-start script to install BTCRecover and its dependencies automatically. The script typically completes within a few minutes of the instance reaching “Successfully Loaded” status.
Copy required files to each worker
Transfer your tokenlist and any autosave files to every remote instance using WinSCP, scp, or by pasting directly into nano on the remote terminal.
Run each worker with a unique --worker value
Launch the same command on each machine, changing only the --worker N/M argument. For example, with 5 workers using the Bitcoin Core JTR kernel, run --worker 1/5 on the first server and continue the pattern through --worker 5/5. The only difference between server commands is the N in --worker N/5.
Vast.ai multi-GPU example
Vast.ai is a GPU rental marketplace commonly used for BTCRecover jobs. It is often cheaper than commercial cloud providers and bills by the second.
Bitcoin Core: 10 GPUs across 5 instances (~1000x faster than a single CPU)
This example uses 5 instances, each with 2x GPUs (10 GPUs total), to recover a Bitcoin Core password via a wallet extract. Run --worker 1/5 on the first server and --worker 2/5 through --worker 5/5 on the remaining servers.
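A sketch of the per-server command, assuming hypothetical file names and GPU work-group sizes; replace the placeholder with your actual --data-extract-string value:

```shell
# Server 1 of 5 (file names, work-group sizes, and extract string are placeholders):
python btcrecover.py --data-extract-string "<your-extract>" \
    --enable-gpu --global-ws 4096 --local-ws 256 \
    --autosave progress1.sav --tokenlist tokens.txt --worker 1/5
# Servers 2-5 run the identical command, changing only --worker N/5
# and the autosave file name.
```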
Blockchain.com: 2 instances with 10 GPUs each (~100x faster than a single CPU)
This example uses the OpenCL_Brute kernel across 2 instances with 10 GPUs each (20 threads recommended).
Multi-GPU performance reference
The table below shows benchmarked performance across common GPU configurations from Vast.ai, useful for estimating cost and time before renting:
| GPU configuration | Blockchain.com (kp/s) | Bitcoin Core JTR (kp/s) | Approx. price ($/h) |
|---|---|---|---|
| i7-8750H (CPU reference) | 1 | 0.07 | — |
| 1660 Ti (GPU reference) | 10 | 6.75 | — |
| 1x 1070 | 6.5 | 3.82 | 0.09 |
| 2x 1070 | 12.5 | 6.45 | 0.296 |
| 10x 1070 | 41 | — | — |
| 10x 1080 | 46 | 13.5 | 1.64 |
| 2x 1080 Ti | 10.1 | 6.1 | 0.30 |
| 6x 1080 Ti | 28 | 9.75 | 1.02 |
| 2x 2070 | 16.6 | 12 | 0.48 |
| 10x 2070 Super | 63 | 16 | 1.60 |
| 2x 2080 Ti | 19.5 | 10.8 | 0.40 |
| 4x 2080 Ti | 37 | 16 | 1.00 |
The JTR kernel does not scale linearly beyond approximately 2 GPUs. For large multi-GPU jobs with Bitcoin Core, factor this into your instance selection.
Splitting work via password lists
If you are using a password list rather than a tokenlist, split the list into equal chunks and pass a different chunk to each worker. You can generate candidate passwords with --listpass and then distribute file segments across machines, each running without --worker (since the input file itself is already partitioned).
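The splitting step can be done with GNU split's line-based chunk mode. In this sketch, seq stands in for real candidates generated with --listpass, and file names are hypothetical:

```shell
# Split a candidate list into 4 equal chunks, one per machine (GNU split).
seq 1 100 > all-passwords.txt          # stand-in for --listpass output
split -n l/4 all-passwords.txt chunk-  # produces chunk-aa .. chunk-ad
wc -l chunk-*
```

Each machine then runs against its own chunk, e.g. `python btcrecover.py --wallet wallet.dat --passwordlist chunk-aa`, with no --worker flag.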
For large tokenlists, you can also pre-generate a checksummed seed list on one machine and distribute it to workers using the --savevalidseeds, --savevalidseeds-filesize, --multi-file-seedlist, and --skip-worker-checksum arguments to avoid redundant checksum computation on every node.
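A sketch of how these flags might fit together for seed recovery; file names and the chunk size are assumptions, so consult the seedrecover documentation for exact usage:

```shell
# On one machine: pre-generate checksum-valid seeds into chunked files
# (tokenlist name and file size are placeholders):
python seedrecover.py --tokenlist seed-tokens.txt \
    --savevalidseeds validseeds --savevalidseeds-filesize 100000
# On each worker: consume the pre-validated seed files, skipping checksum work:
python seedrecover.py --multi-file-seedlist validseeds --skip-worker-checksum
```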