The authoritative, auto-generated source list is maintained in the README on GitHub. The tables below highlight notable sources by language; run `lncrawl sources list` for the full runtime list.

## Feature legend
| Icon | Meaning |
|---|---|
| 🤖 | Machine-translated (MTL) content |
| 🔍 | Supports searching by keyword |
| 🔑 | Requires a login / account |
| 🖼️ | Manga, manhua, or manhwa (image-based) |
## Sources by language
- English (en)
- Chinese (zh)
- Japanese (ja)
- Arabic (ar)
- Spanish (es)
- French (fr)
- Indonesian (id)
- Vietnamese (vi)
- Russian (ru)
- Turkish (tr)
- Portuguese (pt)
- Multi-language
English is the largest language group with well over 200 supported URLs. Below is a selection of well-known sites.
| Source | Features |
|---|---|
| novelfull.com | 🔍 |
| novelbin.com | 🔍 |
| webnovel.com | 🔍 |
| wuxiaworld.com | 🔍 🔑 |
| royalroad.com | 🔍 |
| scribblehub.com | 🔍 |
| lightnovelbastion.com | 🔍 |
| lightnovelheaven.com | 🔍 |
| lightnovelreader.me | 🔍 |
| lightnovelsonl.com | 🔍 |
| freewebnovel.com | 🔍 |
| freefullnovel.com | 🔍 |
| allnovel.org | 🔍 |
| allnovelfull.com | 🔍 |
| novelbuddy.io | 🤖 🔍 |
| novelhall.com | 🤖 🔍 |
| lnmtl.com | 🤖 🔑 |
| mltnovels.com | 🤖 🔍 |
| mtlreader.com | 🤖 🔍 |
| babelnovel.com | 🔍 🔑 |
| chrysanthemumgarden.com | 🔑 |
| aquareader.net | 🔍 🖼️ |
| kissmanga.in | 🔍 🖼️ |
| mangarosie.love | 🔍 🖼️ |
| coffeemanga.io | 🔍 🖼️ |
| readmanganato.com | 🖼️ |
| creativenovels.com | — |
| hostednovel.com | — |
| wuxia.blog | — |
| zetrotranslation.com | 🔍 |
## Check if a source is supported
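If you want to check a single URL offline, one approach is to match its hostname against the list of supported domains. The sketch below is illustrative and not part of the lncrawl API; it assumes you have the supported domains as plain strings (for example, collected from `lncrawl sources list` output or the README table).

```python
from urllib.parse import urlparse

def is_supported(url: str, supported_domains: set[str]) -> bool:
    """Return True if the URL's hostname matches a supported domain,
    including subdomains (e.g. www.royalroad.com matches royalroad.com)."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in supported_domains)

# A small sample; in practice, use the full list from `lncrawl sources list`.
domains = {"royalroad.com", "scribblehub.com", "novelfull.com"}
print(is_supported("https://www.royalroad.com/fiction/1234", domains))  # True
print(is_supported("https://example.com/novel", domains))               # False
```

Matching on the hostname rather than the raw URL string avoids false positives when a supported domain happens to appear in another site's path or query string.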
## Rejected sources
Some sites that previously had crawlers have been rejected; they are tracked in `_rejected.json` but are no longer active. Common rejection reasons include:
- Site is down or domain has expired
- Access denied / Cloudflare blocks all requests
- Platform has been terminated
- Site moved to a different domain (the new domain may be supported instead)
Rejected sources are hidden from `lncrawl sources list` unless you explicitly pass `--include-rejected`. When you try to crawl a rejected URL, lncrawl will show the rejection reason.
## Request a new source
You can also contribute a crawler yourself. Copy `sources/_examples/_01_general_soup.py` into the appropriate `sources/{lang}/` folder and implement the four required methods. See the CREATING_CRAWLERS guide for step-by-step instructions, or use the CLI to scaffold one with AI assistance.
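The rough shape of a crawler is a class that declares the URLs it handles and implements hooks for setup, searching, reading novel metadata, and downloading chapter bodies. The snippet below is a standalone sketch: the stub base class and the method names are assumptions standing in for the real `Crawler` base in lightnovel-crawler, so follow the example template for the actual signatures.

```python
class Crawler:
    """Stub standing in for lncrawl's real Crawler base class (assumption)."""
    base_url: list[str] = []

class MyNovelSiteCrawler(Crawler):
    # Hypothetical site; list every URL prefix this crawler should handle.
    base_url = ["https://my-novel-site.example/"]

    def initialize(self):
        # One-time setup (e.g. headers or cleaner rules), if needed.
        pass

    def search_novel(self, query: str) -> list[dict]:
        # Return search results, e.g. [{"title": ..., "url": ...}, ...].
        return []

    def read_novel_info(self):
        # Populate novel metadata and the chapter list from the novel page.
        self.novel_title = "Untitled"
        self.chapters = []

    def download_chapter_body(self, chapter) -> str:
        # Return the chapter's HTML content as a string.
        return "<p>...</p>"
```

The base class in the real project provides the HTTP session and HTML-parsing helpers, so a concrete crawler mostly contains site-specific selectors inside these methods.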