
[Bug]: Always get None PDF and Screenshot #1230

Closed
@Masterain98

Description


crawl4ai version

0.6.3

Expected Behavior

Get PDF and screenshot when crawling.

Current Behavior

Both result.screenshot and result.pdf are always None

Is this reproducible?

Yes

Inputs Causing the Bug

Sample code (slightly changed from the docs)

import os, asyncio
from base64 import b64decode
from crawl4ai import AsyncWebCrawler, CacheMode

async def main():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://en.wikipedia.org/wiki/List_of_common_misconceptions",
            cache_mode=CacheMode.BYPASS,
            screenshot=True,
            pdf=True  # requested explicitly, since result.pdf is checked below
        )

        if result.success:
            print(f"Size of HTML: {len(result.html)}")
            print(f"Screenshot: {result.screenshot}")
            print(f"PDF: {result.pdf}")
            # Save screenshot
            if result.screenshot:
                print(f"[OK] Screenshot captured, size: {len(result.screenshot)} bytes")
                with open("wikipedia_screenshot.png", "wb") as f:
                    f.write(b64decode(result.screenshot))

            # Save PDF
            if result.pdf:
                print(f"[OK] PDF captured, size: {len(result.pdf)} bytes")
                with open("wikipedia_page.pdf", "wb") as f:
                    f.write(result.pdf)

        else:
            print("[ERROR]", result.error_message)

if __name__ == "__main__":
    asyncio.run(main())

Output:

[INIT].... → Crawl4AI 0.6.3 
[FETCH]... ↓ https://en.wikipedia.org/wiki/List_of_common_misconceptions       
| ✓ | ⏱: 0.66s 
[SCRAPE].. ◆ https://en.wikipedia.org/wiki/List_of_common_misconceptions       
| ✓ | ⏱: 0.21s 
[COMPLETE] ● https://en.wikipedia.org/wiki/List_of_common_misconceptions       
| ✓ | ⏱: 0.88s 
Size of HTML: 629543
Screenshot: None
PDF: None

Process finished with exit code 0
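An aside on the save logic in the repro, in case it trips anyone up: crawl4ai returns the screenshot as a base64-encoded string (hence the `b64decode` before writing the `.png`), while `result.pdf` is raw bytes and is written as-is. A minimal stdlib-only sketch of that screenshot round trip:

```python
from base64 import b64encode, b64decode

# Simulate what the repro does with result.screenshot: the crawler hands
# back a base64 string, which must be decoded before writing a binary .png.
raw_png_bytes = b"\x89PNG\r\n\x1a\n" + b"fake-image-payload"
screenshot_b64 = b64encode(raw_png_bytes).decode("ascii")  # shape of result.screenshot

decoded = b64decode(screenshot_b64)
assert decoded == raw_png_bytes  # the round trip is lossless

# result.pdf, by contrast, is already raw bytes and is written without decoding.
```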

Steps to Reproduce

I get the same issue on both my home and office computers (Windows), each with a fresh installation of Crawl4AI:

  • pip install crawl4ai
  • crawl4ai-setup
(.venv) PS F:\i\Documents\GitHub\crypto-card-crawler> crawl4ai-setup                                                                                
[INIT].... → Running post-installation setup... 
[INIT].... → Installing Playwright browsers...                                                                                                      
Downloading Chromium 136.0.7103.25 (playwright build v1169) from https://cdn.playwright.dev/dbazure/download/playwright/builds/chromium/1169/chromium-win64.zip
144.4 MiB [====================] 100% 0.0s
Chromium 136.0.7103.25 (playwright build v1169) downloaded to C:\Users\i\AppData\Local\ms-playwright\chromium-1169
Downloading FFMPEG playwright build v1011 from https://cdn.playwright.dev/dbazure/download/playwright/builds/ffmpeg/1011/ffmpeg-win64.zip
1.3 MiB [====================] 100% 0.0s
FFMPEG playwright build v1011 downloaded to C:\Users\i\AppData\Local\ms-playwright\ffmpeg-1011
Downloading Chromium Headless Shell 136.0.7103.25 (playwright build v1169) from https://cdn.playwright.dev/dbazure/download/playwright/builds/chromium/1169/chromium-headless-shell-win64.zip
89.1 MiB [====================] 100% 0.0s
Chromium Headless Shell 136.0.7103.25 (playwright build v1169) downloaded to C:\Users\i\AppData\Local\ms-playwright\chromium_headless_shell-1169
Downloading Winldd playwright build v1007 from https://cdn.playwright.dev/dbazure/download/playwright/builds/winldd/1007/winldd-win64.zip
0.1 MiB [====================] 100% 0.0s
Winldd playwright build v1007 downloaded to C:\Users\i\AppData\Local\ms-playwright\winldd-1007
[COMPLETE] ● Playwright installation completed successfully. 
[INIT].... → Starting database initialization...                                                                                                    
[COMPLETE] ● Database backup created at: C:\Users\i\.crawl4ai\crawl4ai.db.backup_20250618_004217 
[INIT].... → Starting database migration...                                                                                                         
[COMPLETE] ● Migration completed. 0 records processed.                                                                                              
[COMPLETE] ● Database initialization completed successfully.                                                                                        
[COMPLETE] ● Post-installation setup completed!                                                                                                     
(.venv) PS F:\i\Documents\GitHub\crypto-card-crawler> 

  • Run the demo code above
  • I also tried setting Firefox as the browser, but still get None
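One thing worth trying (this is an assumption on my part based on the 0.6.x config-object API, not something confirmed above): pass the capture flags through a CrawlerRunConfig instead of as bare keyword arguments to arun(), in case the latter are silently ignored in this version.

```python
from crawl4ai import CrawlerRunConfig, CacheMode

# Assumption: in 0.6.x the capture flags are read from the run config,
# so screenshot/pdf passed as bare kwargs to arun() may never reach the browser.
run_config = CrawlerRunConfig(
    cache_mode=CacheMode.BYPASS,
    screenshot=True,
    pdf=True,
)
# Then call: result = await crawler.arun(url=..., config=run_config)
```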

Code snippets

OS

Windows

Python version

3.13

Browser

Chrome

Browser version

137.0.7151.105

Error logs & Screenshots (if applicable)

No response
