http://www.editcorp.com/Personal/Lars_Appel/wget/

# output to a file
wget --output-document <OUTPUT FILE> <URL>

# crawl a site
wget <URL> -r --spider -nd -o output.txt

# print to stdout (quiet, no debugging output)
wget -qO- <URL>

# get page requisites and convert links for local viewing
wget -pk <URL>

## cookies (see the login sketch at the end of these notes)
wget -q --keep-session-cookies --save-cookies cookies.txt "LINK"
wget --load-cookies cookies.txt "LINK"

# spider example
wget -r --spider -nd -p --exclude-domains xxxx.com -l 1 http://yyyy.html

Downloading an Entire Web Site with wget
http://www.linuxjournal.com/content/downloading-entire-web-site-wget

If you ever need to download an entire Web site, perhaps for off-line viewing, wget can do the job:

$ wget \
    --recursive \
    --no-clobber \
    --page-requisites \
    --html-extension \
    --convert-links \
    --restrict-file-names=windows \
    --domains website.org \
    --no-parent \
    www.website.org/tutorials/html/

This command downloads the Web site www.website.org/tutorials/html/. The options are:

--recursive: download the entire Web site.
--no-clobber: don't overwrite any existing files (useful if the download is interrupted and resumed).
--page-requisites: get all the elements that compose the page (images, CSS and so on).
--html-extension: save files with the .html extension.
--convert-links: convert links so that they work locally, off-line.
--restrict-file-names=windows: modify filenames so that they will work in Windows as well.
--domains website.org: don't follow links outside website.org.
--no-parent: don't follow links outside the directory tutorials/html/.
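A sketch of the cookie commands above used together for a form-based login; example.com, login.php and the form field names are placeholders, assuming the site accepts a plain form POST:

# log in and keep the session cookies (hypothetical URL and form fields)
wget -q --keep-session-cookies --save-cookies cookies.txt \
     --post-data 'user=USERNAME&password=PASSWORD' \
     "http://example.com/login.php"

# reuse the saved session to fetch a page that requires login
wget --load-cookies cookies.txt "http://example.com/members/index.html"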
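Side note: wget 1.12 and later spell --html-extension as --adjust-extension (-E). A roughly equivalent short form of the full-site command above is sketched below; it is not byte-for-byte identical, since -m (--mirror) turns on timestamping rather than --no-clobber:

$ wget -m -k -E -p -np --domains website.org www.website.org/tutorials/html/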