capture-website-cli

Capture screenshots of websites from the command-line

It uses Puppeteer (Chrome) under the hood.

Install

npm install --global capture-website-cli

Note to Linux users: If you get a "No usable sandbox!" error, you need to enable system sandboxing.
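
For example, on some Debian-based kernels this means enabling unprivileged user namespaces (a hedged sketch; the exact steps vary by distribution):

# Assumption: a Debian-based kernel that exposes the unprivileged_userns_clone knob
sudo sysctl -w kernel.unprivileged_userns_clone=1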

Note to Apple silicon users: If you get a "spawn Unknown system error" error, try installing Rosetta by running softwareupdate --install-rosetta.

Usage

$ capture-website --help
 Usage
 $ capture-website <url|file>
 $ echo "<h1>Unicorn</h1>" | capture-website
 Options
 --output Image file path (writes it to stdout if omitted)
 --auto-output Automatically generate output filename from URL/input
 --width Page width [default: 1280]
 --height Page height [default: 800]
 --type Image type: png|jpeg|webp [default: png]
 --quality Image quality: 0...1 (Only for JPEG and WebP) [default: 1]
 --scale-factor Scale the webpage `n` times [default: 2]
 --list-devices Output a list of supported devices to emulate
 --emulate-device Capture as if it were captured on the given device
 --full-page Capture the full scrollable page, not just the viewport
 --no-default-background Make the default background transparent
 --timeout Seconds before giving up trying to load the page. Specify `0` to disable. [default: 60]
 --delay Seconds to wait after the page finished loading before capturing the screenshot [default: 0]
 --wait-for-element Wait for a DOM element matching the CSS selector to appear in the page and to be visible before capturing the screenshot
 --element Capture the DOM element matching the CSS selector. It will wait for the element to appear in the page and to be visible.
 --hide-elements Hide DOM elements matching the CSS selector (Can be set multiple times)
 --remove-elements Remove DOM elements matching the CSS selector (Can be set multiple times)
 --click-element Click the DOM element matching the CSS selector
 --scroll-to-element Scroll to the DOM element matching the CSS selector
 --disable-animations Disable CSS animations and transitions [default: false]
 --no-javascript Disable JavaScript execution (does not affect --module/--script)
 --module Inject a JavaScript module into the page. Can be inline code, absolute URL, and local file path with `.js` extension. (Can be set multiple times)
 --script Same as `--module`, but instead injects the code as a classic script
 --style Inject CSS styles into the page. Can be inline code, absolute URL, and local file path with `.css` extension. (Can be set multiple times)
 --header Set a custom HTTP header (Can be set multiple times)
 --user-agent Set the user agent
 --cookie Set a cookie (Can be set multiple times)
 --authentication Credentials for HTTP authentication
 --debug Show the browser window to see what it's doing
 --dark-mode Emulate preference of dark color scheme
 --local-storage Set localStorage items before the page loads (Can be set multiple times)
 --launch-options Puppeteer launch options as JSON
 --overwrite Overwrite the destination file if it exists
 --inset Inset the screenshot relative to the viewport or `--element`. Accepts a number or four comma-separated numbers for top, right, bottom, and left.
 --clip Position and size in the website (clipping region). Accepts comma-separated numbers for x, y, width, and height.
 --no-block-ads Disable ad blocking
 --allow-cors Allow cross-origin requests (useful for local HTML files)
 --wait-for-network-idle Wait for network connections to finish
 --insecure Accept self-signed and invalid SSL certificates
 Examples
 $ capture-website https://sindresorhus.com --output=screenshot.png
 $ capture-website https://sindresorhus.com --auto-output
 $ capture-website index.html --output=screenshot.png
 $ echo "<h1>Unicorn</h1>" | capture-website --output=screenshot.png
 $ capture-website https://sindresorhus.com | open -f -a Preview
 Flag examples
 --width=1000
 --height=600
 --type=jpeg
 --quality=0.5
 --scale-factor=3
 --emulate-device="iPhone X"
 --timeout=80
 --delay=10
 --wait-for-element="#header"
 --element=".main-content"
 --hide-elements=".sidebar"
 --remove-elements="img.ad"
 --click-element="button"
 --scroll-to-element="#map"
 --disable-animations
 --no-javascript
 --module=https://sindresorhus.com/remote-file.js
 --module=local-file.js
 --module="document.body.style.backgroundColor = 'red'"
 --header="x-powered-by: capture-website-cli"
 --user-agent="I love unicorns"
 --cookie="id=unicorn; Expires=2018年10月21日 07:28:00 GMT;"
 --authentication="username:password"
 --launch-options='{"headless": false}'
 --dark-mode
 --local-storage="theme=dark"
 --inset=10,15,-10,15
 --inset=30
 --clip=10,30,300,1024
 --no-block-ads
 --allow-cors
 --wait-for-network-idle
 --insecure
 --auto-output
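
The flags compose freely. An illustrative combination (the #header selector is only an assumption about the target page) that captures one element as a compressed JPEG in dark mode:

capture-website https://sindresorhus.com \
  --output=header.jpeg \
  --type=jpeg --quality=0.8 \
  --element="#header" \
  --dark-mode --overwrite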

FAQ

More in the capture-website readme: https://github.com/sindresorhus/capture-website#faq

How do I capture websites with self-signed certificates?

Use the --insecure flag to bypass certificate validation.
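
For example, against badssl.com's public self-signed test endpoint:

capture-website https://self-signed.badssl.com --output=screenshot.png --insecure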

I'm getting connection errors (ECONNREFUSED), what can I do?

Network connectivity issues can occur due to:

  • Slow networks: Increase the timeout, e.g. --timeout=120 (the default is 60 seconds)
  • Corporate firewalls: May block network requests
  • VPN/proxy issues: Try disabling VPN or configuring proxy settings
  • IPv6 issues: Some networks have IPv6 connectivity problems

Try testing with a simple site first: capture-website https://example.com --output=test.png
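
A minimal triage sequence (the slow-site URL is a placeholder):

# 1. Confirm the tool and network work at all
capture-website https://example.com --output=test.png
# 2. Retry the real target with a longer timeout (the default is 60 seconds)
capture-website https://slow-site.example --output=screenshot.png --timeout=120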

How does --auto-output work?

It automatically generates filenames based on the input:

  • URLs: example.com.png
  • Files: index.png (from index.html)
  • Stdin: screenshot.png

If a file already exists, it increments: example.com (1).png, example.com (2).png, etc.
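
To illustrate (assuming example.com.png does not exist yet):

capture-website https://example.com --auto-output    # writes example.com.png
capture-website https://example.com --auto-output    # writes example.com (1).png
echo "<h1>Unicorn</h1>" | capture-website --auto-output    # writes screenshot.png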

How can I capture websites from a file with URLs?

Let's say you have a file named urls.txt with:

https://sindresorhus.com
https://github.com

You can run this:

# With auto-output (simpler)
while IFS= read -r url; do
  capture-website "$url" --auto-output
done < urls.txt

# Or with custom naming (strips unsafe characters from the URL)
while IFS= read -r url; do
  capture-website "$url" --output "screenshot-$(echo "$url" | sed -e 's/[^A-Za-z0-9._-]//g').png"
done < urls.txt
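
If the list is long, the loop can be parallelized. A sketch assuming an xargs that supports -P (GNU and modern BSD versions do):

# Run up to 4 captures at a time; -I{} feeds one line of urls.txt per invocation
xargs -P4 -I{} capture-website {} --auto-output < urls.txt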

Related

capture-website - API for this module
