SuckIT allows you to recursively visit and download a website's content to your disk.
suckit 0.1.0
CLI arguments

USAGE:
    suckit [FLAGS] [OPTIONS] <url>

FLAGS:
    -c, --continue-on-error    Flag to enable or disable exit on error
    -h, --help                 Prints help information
    -V, --version              Prints version information
    -v, --verbose              Enable more information regarding the scraping process

OPTIONS:
        --delay <delay>                  Add a delay in seconds between downloads to reduce the likelihood of getting banned [default: 0]
    -d, --depth <depth>                  Maximum recursion depth to reach when visiting. -1 is the default and will go as far as it can [default: -1]
    -e, --exclude <exclude>              Regex filter to exclude saving pages that match this expression [default: $^]
    -i, --include <include>              Regex filter to limit to only saving pages that match this expression [default: .*]
    -j, --jobs <jobs>                    Maximum number of threads to use concurrently [default: 1]
    -o, --output <output>                Output directory
        --random-range <random-range>    Generate an extra random delay between downloads, from 0 to this number. This is added to the base delay seconds [default: 0]
    -t, --tries <tries>                  Maximum amount of retries on download failure [default: 20]
    -u, --user-agent <user-agent>        User agent to be used for sending requests [default: suckit]

ARGS:
    <url>    Entry point of the scraping
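As a quick illustration of the options above, the invocations below are a minimal sketch: the first mirrors a site two levels deep with four threads and a randomized delay between downloads; the second uses the include filter to restrict the crawl. The URL, output directories, and regex are placeholder values, and the assumption that the include pattern is matched against page URLs comes from the option description, not from tested behavior:

    suckit https://example.com -o mirror -j 4 -d 2 --delay 1 --random-range 2
    suckit https://example.com -i "example\.com/blog/.*" -o blog-mirror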