Learn how to scrape web content with nodriver using text search and CSS selectors
This example demonstrates the fundamentals of web scraping with nodriver, including finding elements, extracting data, and handling interactive content.
```python
# Find by text (waits up to 10 seconds by default)
element = await tab.find("login")

# Find best match by text length
element = await tab.find("login", best_match=True)
```
```python
# Click an element
await element.click()

# Send text input
await element.send_keys("your text here")

# Clear input field
await element.clear_input()

# Scroll element into view
await element.scroll_into_view()
```
```python
# Navigate to URL
await tab.get("https://example.com")

# Navigate back
await tab.back()

# Scroll the page
await tab.scroll_down(100)
await tab.scroll_up(50)
```
Awaiting the tab object itself (`await tab`) updates all element references and lets the script "breathe", which is useful when the script runs faster than the browser can render.
You can monitor all network requests and responses:
```python
from nodriver import cdp


async def send_handler(event: cdp.network.RequestWillBeSent):
    request = event.request
    print(f"{request.method} {request.url}")
    for key, value in request.headers.items():
        print(f"  {key}: {value}")


async def receive_handler(event: cdp.network.ResponseReceived):
    response = event.response
    print(f"Received: {response.url} - Status: {response.status}")


# Attach handlers before navigation
tab.add_handler(cdp.network.RequestWillBeSent, send_handler)
tab.add_handler(cdp.network.ResponseReceived, receive_handler)
```
Always use await tab.find() or await tab.select() instead of hardcoded sleeps. These methods automatically retry for up to 10 seconds, making your scripts more robust.