When a recovery job is too large to finish in a reasonable time on a single machine, BTCRecover can divide the search space and distribute it across multiple computers or cloud GPU instances. Each machine handles an independent, non-overlapping slice of the total work.

The --worker flag

The --worker N/M flag tells BTCRecover that this instance should process slice N out of M equal partitions. Every machine in the group runs the same command — only the N value changes.
# Machine 1 of 3
python3 btcrecover.py --wallet ./wallet.dat --tokenlist tokens.txt --worker 1/3

# Machine 2 of 3
python3 btcrecover.py --wallet ./wallet.dat --tokenlist tokens.txt --worker 2/3

# Machine 3 of 3
python3 btcrecover.py --wallet ./wallet.dat --tokenlist tokens.txt --worker 3/3
The partitioning is deterministic — you can safely stop and restart any individual worker without affecting the others.
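Conceptually, the split behaves like dealing candidates out round-robin: worker N of M takes every M-th candidate starting at offset N-1. The sketch below illustrates the key properties (deterministic, disjoint, exhaustive); BTCRecover's internal chunking is an implementation detail and may interleave differently.

```python
def worker_slice(candidates, n, m):
    """Illustrative only: worker n of m takes every m-th candidate,
    starting at offset n-1 (workers are numbered 1..m)."""
    return [c for i, c in enumerate(candidates) if i % m == n - 1]

passwords = [f"pass{i}" for i in range(10)]
s1 = worker_slice(passwords, 1, 3)
s2 = worker_slice(passwords, 2, 3)
s3 = worker_slice(passwords, 3, 3)

# The three slices are disjoint and together cover every candidate,
# which is why any worker can be stopped and restarted independently.
assert sorted(s1 + s2 + s3) == sorted(passwords)
```

Because the mapping depends only on N, M, and the candidate ordering, restarting worker 2/3 always re-derives exactly the same slice.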

The --threads flag

By default BTCRecover uses as many threads as there are logical CPU cores. Use --threads to override this, which is particularly useful when running multiple GPU workers:
python3 btcrecover.py --wallet ./wallet.dat --enable-opencl --threads 20
On Windows, BTCRecover is limited to 64 threads. If your machine has more than 64 logical cores, set --threads to 64 or below explicitly.
For the OpenCL_Brute kernel, a good starting point is 2 threads per GPU. BTCRecover assigns GPUs to threads in round-robin order, so having at least as many threads as GPUs ensures every GPU is utilised.
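The round-robin mapping is why the thread count matters: with fewer threads than GPUs, some devices are never assigned any work. A minimal sketch of the assignment (device names are hypothetical):

```python
def assign_gpus(num_threads, gpus):
    """Round-robin: thread t is assigned GPU t mod len(gpus)."""
    return {t: gpus[t % len(gpus)] for t in range(num_threads)}

gpus = ["gpu0", "gpu1", "gpu2"]      # hypothetical 3-GPU instance
with_two = assign_gpus(2, gpus)      # only 2 threads: gpu2 sits idle
with_six = assign_gpus(6, gpus)      # 2 threads per GPU: all devices busy

assert set(with_two.values()) != set(gpus)
assert set(with_six.values()) == set(gpus)
```

With 2 threads per GPU (6 threads here), each device stays fed while one of its threads is busy on the CPU side.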

Saving and resuming progress

Use --autosave to write periodic checkpoints to a file. If a worker is interrupted, restart it with --restore to pick up where it left off:
# Start with autosave
python3 btcrecover.py --wallet ./wallet.dat --tokenlist tokens.txt \
  --worker 2/5 --autosave autosave_worker2.file

# Resume after interruption
python3 btcrecover.py --restore autosave_worker2.file
Autosave files are binary — copy them between machines with a file transfer tool such as WinSCP (Windows) or scp (Linux/macOS).

Setting up a multi-machine recovery

1. Extract wallet data

Generate a wallet extract on your local machine so you can share only the minimum required data with remote workers — no private keys or addresses are exposed.
# Bitcoin Core
python3 extract-bitcoincore-mkey.py ./wallet.dat

# Blockchain.com
python3 extract-blockchain-main-data.py ./wallet.aes.json
The output is a short base64 string you pass to workers via --data-extract-string.
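The extract string is ordinary base64; the decoded payload begins with a short type tag. You can sanity-check a string before shipping it to workers. (The tag values below are taken from the example strings on this page: the Bitcoin Core example decodes to bytes starting with `bc:`, the Blockchain.com one with `bk:`.)

```python
import base64

def extract_tag(data_extract_string):
    """Return the 3-byte type tag at the start of a decoded wallet extract."""
    return base64.b64decode(data_extract_string)[:3]

# Bitcoin Core example from this page: tag is b"bc:"
tag = extract_tag(
    "YmM65iRhIMReOQ2qaldHbn++T1fYP3nXX5tMHbaA/lqEbLhFk6/1Y5F5x0QJAQBI/maR"
)
assert tag == b"bc:"
```

A string that fails to decode, or carries an unexpected tag, was probably corrupted in copy-paste; catching that locally is cheaper than a failed run on rented hardware.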
2. Build and test your tokenlist locally

Run the recovery command on your local machine first to count passwords, catch tokenlist errors, and estimate total runtime before committing to rented hardware.
python3 btcrecover.py --data-extract-string <base64_string> \
  --tokenlist tokens.txt
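With a candidate count from the local run and a benchmark rate (see the performance table below, or your own measurement), estimating per-worker wall-clock time is simple arithmetic. The numbers here are hypothetical:

```python
def hours_per_worker(total_candidates, kp_per_sec, workers):
    """Worst-case hours for one worker: its share of the candidates
    divided by its throughput (kp/s = thousands of passwords/second)."""
    per_worker = total_candidates / workers
    return per_worker / (kp_per_sec * 1000) / 3600

# Hypothetical job: 2 billion candidates, 5 workers at 19.5 kp/s each
estimate = hours_per_worker(2_000_000_000, 19.5, 5)
```

If the estimate is unacceptably long, adjust the tokenlist or add workers before renting instances, not after.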
3. Provision remote instances

For cloud GPU instances (e.g. Vast.ai), use the dceoy/hashcat Docker image and an on-start script to install BTCRecover and its dependencies automatically:
apt update
apt install python3 python3-pip python3-dev python3-pyopencl nano mc git python3-bsddb3 -y
apt install libssl-dev build-essential automake pkg-config libtool libffi-dev libgmp-dev libyaml-cpp-dev libsecp256k1-dev -y
git clone https://github.com/3rdIteration/btcrecover.git
pip3 install -r ~/btcrecover/requirements-full.txt
update-locale LANG=C.UTF-8
The script typically completes within a few minutes of the instance reaching “Successfully Loaded” status.
4. Copy required files to each worker

Transfer your tokenlist and any autosave files to every remote instance using WinSCP, scp, or by pasting directly into nano on the remote terminal.
5. Run each worker with a unique --worker value

Launch the same command on each machine, changing only the --worker N/M argument. Example with 5 workers using the Bitcoin Core JTR kernel:
# Server 1
python3 btcrecover.py --data-extract-string <base64_string> \
  --tokenlist tokens.txt --dsw --no-eta --no-dupchecks \
  --enable-gpu --global-ws 4096 --local-ws 256 \
  --autosave autosave.file --worker 1/5

# Server 2
python3 btcrecover.py --data-extract-string <base64_string> \
  --tokenlist tokens.txt --dsw --no-eta --no-dupchecks \
  --enable-gpu --global-ws 4096 --local-ws 256 \
  --autosave autosave.file --worker 2/5
Continue the pattern through --worker 5/5. The only difference between server commands is the N in --worker N/5.
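Since the commands differ only in N, it can help to generate all of them programmatically and paste one per server. A small sketch (the flags mirror the example above; the extract string is a placeholder to fill in):

```python
BASE = ("python3 btcrecover.py --data-extract-string <base64_string> "
        "--tokenlist tokens.txt --dsw --no-eta --no-dupchecks "
        "--enable-gpu --global-ws 4096 --local-ws 256 "
        "--autosave autosave.file")

workers = 5
# One command per server, differing only in the --worker N/M argument
commands = [f"{BASE} --worker {n}/{workers}" for n in range(1, workers + 1)]
for cmd in commands:
    print(cmd)
```

Generating the commands in one place avoids the classic mistake of two servers accidentally running the same slice while another slice goes unsearched.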
6. Collect the result and destroy instances

Whichever worker finds the password will print it. Once found, stop all remaining workers and, for cloud instances, destroy them to stop billing.

Vast.ai multi-GPU example

Vast.ai is a GPU rental marketplace commonly used for BTCRecover jobs. It is often cheaper than commercial cloud providers and bills by the second.

Bitcoin Core: 10 GPUs across 5 instances (~1000x faster than a single CPU)

This example uses 5 instances each with 2x GPUs (10 GPUs total) to recover a Bitcoin Core password via wallet extract:
# Server 1 (2x 2080 Ti, --worker 1/5)
python3 btcrecover.py \
  --data-extract-string YmM65iRhIMReOQ2qaldHbn++T1fYP3nXX5tMHbaA/lqEbLhFk6/1Y5F5x0QJAQBI/maR \
  --tokenlist tokens.txt --dsw --no-eta --no-dupchecks \
  --enable-gpu --global-ws 4096 --local-ws 256 \
  --autosave autosave.file --worker 1/5
Repeat with --worker 2/5 through --worker 5/5 on the remaining servers.

Blockchain.com: 2 instances with 10 GPUs each (~100x faster than a single CPU)

This example uses the OpenCL_Brute kernel across 2 instances with 10 GPUs each (20 threads recommended):
# Server 1 (10x 1080, --worker 1/2)
python3 btcrecover.py \
  --data-extract-string Yms6A6G5G+a+Q2Sm8GwZcojLJOJFk2tMKKhzmgjn28BZuE6IEwAA2s7F2Q== \
  --tokenlist tokenlist.txt --dsw --no-eta --no-dupchecks \
  --enable-opencl --threads 20 \
  --autosave autosave.file --worker 1/2

# Server 2 (10x 1080, --worker 2/2)
python3 btcrecover.py \
  --data-extract-string Yms6A6G5G+a+Q2Sm8GwZcojLJOJFk2tMKKhzmgjn28BZuE6IEwAA2s7F2Q== \
  --tokenlist tokenlist.txt --dsw --no-eta --no-dupchecks \
  --enable-opencl --threads 20 \
  --autosave autosave.file --worker 2/2

Multi-GPU performance reference

The table below shows benchmarked performance across common GPU configurations from Vast.ai, useful for estimating cost and time before renting:
| GPU configuration | Blockchain.com (kp/s) | Bitcoin Core JTR (kp/s) | Approx. price ($/h) |
| --- | --- | --- | --- |
| i7-8750H (CPU reference) | 1 | 0.07 | |
| 1660 Ti (GPU reference) | 10 | 6.75 | |
| 1x 1070 | 6.5 | 3.82 | 0.09 |
| 2x 1070 | 12.5 | 6.45 | 0.296 |
| 10x 1070 | 41 | | |
| 10x 1080 | 46 | 13.5 | 1.64 |
| 2x 1080 Ti | 10.1 | 6.1 | 0.30 |
| 6x 1080 Ti | 28 | 9.75 | 1.02 |
| 2x 2070 | 16.6 | 12 | 0.48 |
| 10x 2070 Super | 63 | 16 | 1.60 |
| 2x 2080 Ti | 19.5 | 10.8 | 0.40 |
| 4x 2080 Ti | 37 | 16 | 1.00 |
The JTR kernel does not scale linearly beyond approximately 2 GPUs. For large multi-GPU jobs with Bitcoin Core, factor this into your instance selection.
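The table also lets you compare configurations by cost per password rather than raw speed, which is usually the better selection criterion for long jobs. A sketch using the Blockchain.com rates and prices from the table above:

```python
def dollars_per_billion(kp_per_sec, price_per_hour):
    """Cost to test one billion passwords at a given rate and hourly price."""
    seconds = 1e9 / (kp_per_sec * 1000)
    return seconds / 3600 * price_per_hour

# Rates/prices from the table above (Blockchain.com column)
cheap_slow = dollars_per_billion(6.5, 0.09)    # 1x 1070
fast_dear = dollars_per_billion(19.5, 0.40)    # 2x 2080 Ti
```

Here the 1x 1070 works out cheaper per password than the 2x 2080 Ti, at the cost of a roughly 3x longer runtime; which trade-off wins depends on how urgently you need the result.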

Splitting work via password lists

If you are using a password list rather than a tokenlist, split the list into equal chunks and pass a different chunk to each worker. You can generate candidate passwords with listpass and then distribute the file segments across machines; each worker runs without --worker, since the input file itself is already partitioned.

For large seed-recovery jobs, you can also pre-generate a checksummed seed list on one machine and distribute it to the workers using the --savevalidseeds, --savevalidseeds-filesize, --multi-file-seedlist, and --skip-worker-checksum arguments, which avoids redundant checksum computation on every node.
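A minimal sketch of the chunking step, splitting a candidate list into M near-equal contiguous chunks, one file per worker (equivalent to what the Unix `split` command does for you):

```python
def split_chunks(lines, workers):
    """Split candidate passwords into `workers` near-equal contiguous chunks."""
    size, extra = divmod(len(lines), workers)
    chunks, start = [], 0
    for i in range(workers):
        # The first `extra` chunks absorb one leftover line each
        end = start + size + (1 if i < extra else 0)
        chunks.append(lines[start:end])
        start = end
    return chunks

passwords = [f"pw{i}" for i in range(11)]
chunks = split_chunks(passwords, 3)

# Chunks are disjoint, ordered, and cover every candidate exactly once
assert sum(chunks, []) == passwords
```

Write each chunk to its own file and transfer one to each machine; as with --worker, the guarantee you need is that the chunks are disjoint and together cover the whole list.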
