WGET(1) GNU Wget WGET(1)
NAME
wget - GNU Wget Manual
SYNOPSIS
wget [option]... [URL]...
DESCRIPTION
GNU Wget is a free utility for non-interactive download of
files from the Web. It supports HTTP, HTTPS, and FTP pro-
tocols, as well as retrieval through HTTP proxies.
Wget is non-interactive, meaning that it can work in the
background, while the user is not logged on. This allows
you to start a retrieval and disconnect from the system,
letting Wget finish the work. By contrast, most Web
browsers require the user's constant presence, which can
be a great hindrance when transferring a lot of data.
Wget can follow links in HTML pages and create local ver-
sions of remote web sites, fully recreating the directory
structure of the original site. This is sometimes
referred to as ``recursive downloading.'' While doing
that, Wget respects the Robot Exclusion Standard
(/robots.txt). Wget can be instructed to convert the
links in downloaded HTML files to the local files for
offline viewing.
Wget has been designed for robustness over slow or unsta-
ble network connections; if a download fails due to a net-
work problem, it will keep retrying until the whole file
has been retrieved. If the server supports regetting, it
will instruct the server to continue the download from
where it left off.
OPTIONS
Basic Startup Options
-V
--version
Display the version of Wget.
-h
--help
Print a help message describing all of Wget's command-
line options.
-b
--background
Go to background immediately after startup. If no
output file is specified via the -o option, output is redi-
rected to wget-log.
-e command
--execute command
Execute command as if it were a part of .wgetrc. A
command thus invoked will be executed after the com-
mands in .wgetrc, thus taking precedence over them.
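For example (the host and file name here are only
illustrative), a .wgetrc setting such as the `progress'
command described under --progress below can be given
directly on the command line:
wget -e "progress=dot" http://www.example.com/file.tar.gz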
Logging and Input File Options
-o logfile
--output-file=logfile
Log all messages to logfile. The messages are nor-
mally reported to standard error.
-a logfile
--append-output=logfile
Append to logfile. This is the same as -o, only it
appends to logfile instead of overwriting the old log
file. If logfile does not exist, a new file is cre-
ated.
-d
--debug
Turn on debug output, meaning various information
important to the developers of Wget if it does not
work properly. Your system administrator may have
chosen to compile Wget without debug support, in which
case -d will not work. Please note that compiling
with debug support is always safe---Wget compiled with
the debug support will not print any debug info unless
requested with -d.
-q
--quiet
Turn off Wget's output.
-v
--verbose
Turn on verbose output, with all the available data.
The default output is verbose.
-nv
--non-verbose
Non-verbose output---turn off verbose without being
completely quiet (use -q for that), which means that
error messages and basic information still get
printed.
-i file
--input-file=file
Read URLs from file, in which case no URLs need to be
on the command line. If there are URLs both on the
command line and in an input file, those on the com-
mand line will be the first ones to be retrieved.
The file need not be an HTML document (but no harm if
it is)---it is enough if the URLs are just listed
sequentially.
However, if you specify --force-html, the document
will be regarded as html. In that case you may have
problems with relative links, which you can solve
either by adding `<base href="url">' to the documents
or by specifying --base=url on the command line.
-F
--force-html
When input is read from a file, force it to be treated
as an HTML file. This enables you to retrieve rela-
tive links from existing HTML files on your local
disk, by adding `<base href="url">' to HTML, or using
the --base command-line option.
-B URL
--base=URL
When used in conjunction with -F, prepends URL to rel-
ative links in the file specified by -i.
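As a sketch (the file name and base URL are only
illustrative), relative links in a locally saved HTML
page could be resolved and retrieved like this:
wget -F -B http://www.example.com/docs/ -i saved-page.html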
Download Options
--bind-address=ADDRESS
When making client TCP/IP connections, `bind()' to
ADDRESS on the local machine. ADDRESS may be speci-
fied as a hostname or IP address. This option can be
useful if your machine is bound to multiple IPs.
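For instance (the address below is only an example), a
retrieval can be forced to originate from a particular
local address:
wget --bind-address=192.168.0.2 http://www.example.com/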
-t number
--tries=number
Set number of retries to number. Specify 0 or inf for
infinite retrying.
-O file
--output-document=file
The documents will not be written to the appropriate
files, but all will be concatenated together and writ-
ten to file. If file already exists, it will be over-
written. If the file is -, the documents will be
written to standard output. Including this option
automatically sets the number of tries to 1.
-nc
--no-clobber
If a file is downloaded more than once in the same
directory, Wget's behavior depends on a few options,
including -nc. In certain cases, the local file will
be clobbered, or overwritten, upon repeated download.
In other cases it will be preserved.
When running Wget without -N, -nc, or -r, downloading
the same file in the same directory will result in the
original copy of file being preserved and the second
copy being named file.1. If that file is downloaded
yet again, the third copy will be named file.2, and so
on. When -nc is specified, this behavior is sup-
pressed, and Wget will refuse to download newer copies
of file. Therefore, ``no-clobber'' is actually a
misnomer in this mode---it's not clobbering that's
prevented (as the numeric suffixes were already pre-
venting clobbering), but rather the multiple version
saving that's prevented.
When running Wget with -r, but without -N or -nc, re-
downloading a file will result in the new copy simply
overwriting the old. Adding -nc will prevent this
behavior, instead causing the original version to be
preserved and any newer copies on the server to be
ignored.
When running Wget with -N, with or without -r, the
decision as to whether or not to download a newer copy
of a file depends on the local and remote timestamp
and size of the file. -nc may not be specified at the
same time as -N.
Note that when -nc is specified, files with the suf-
fixes .html or (yuck) .htm will be loaded from the
local disk and parsed as if they had been retrieved
from the Web.
-c
--continue
Continue getting a partially-downloaded file. This is
useful when you want to finish up a download started
by a previous instance of Wget, or by another program.
For instance:
wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
If there is a file named ls-lR.Z in the current direc-
tory, Wget will assume that it is the first portion of
the remote file, and will ask the server to continue
the retrieval from an offset equal to the length of
the local file.
Note that you don't need to specify this option if you
just want the current invocation of Wget to retry
downloading a file should the connection be lost mid-
way through. This is the default behavior. -c only
affects resumption of downloads started prior to this
invocation of Wget, and whose local files are still
sitting around.
Without -c, the previous example would just download
the remote file to ls-lR.Z.1, leaving the truncated
ls-lR.Z file alone.
Beginning with Wget 1.7, if you use -c on a non-empty
file, and it turns out that the server does not sup-
port continued downloading, Wget will refuse to start
the download from scratch, which would effectively
ruin existing contents. If you really want the down-
load to start from scratch, remove the file.
Also beginning with Wget 1.7, if you use -c on a file
which is of equal size as the one on the server, Wget
will refuse to download the file and print an explana-
tory message. The same happens when the file is
smaller on the server than locally (presumably because
it was changed on the server since your last download
attempt)---because ``continuing'' is not meaningful,
no download occurs.
On the other side of the coin, while using -c, any
file that's bigger on the server than locally will be
considered an incomplete download and only
`(length(remote) - length(local))' bytes will be down-
loaded and tacked onto the end of the local file.
This behavior can be desirable in certain cases---for
instance, you can use wget -c to download just the new
portion that's been appended to a data collection or
log file.
However, if the file is bigger on the server because
it's been changed, as opposed to just appended to,
you'll end up with a garbled file. Wget has no way of
verifying that the local file is really a valid prefix
of the remote file. You need to be especially careful
of this when using -c in conjunction with -r, since
every file will be considered as an "incomplete down-
load" candidate.
Another instance where you'll get a garbled file if
you try to use -c is if you have a lame HTTP proxy
that inserts a ``transfer interrupted'' string into
the local file. In the future a ``rollback'' option
may be added to deal with this case.
Note that -c only works with FTP servers and with HTTP
servers that support the `Range' header.
--progress=type
Select the type of the progress indicator you wish to
use. Legal indicators are ``dot'' and ``bar''.
The ``bar'' indicator is used by default. It draws an
ASCII progress bar graphics (a.k.a ``thermometer''
display) indicating the status of retrieval. If the
output is not a TTY, the ``dot'' progress will be used by
default.
Use --progress=dot to switch to the ``dot'' display.
It traces the retrieval by printing dots on the
screen, each dot representing a fixed amount of down-
loaded data.
When using the dotted retrieval, you may also set the
style by specifying the type as dot:style. Different
styles assign different meaning to one dot. With the
`default' style each dot represents 1K, there are ten
dots in a cluster and 50 dots in a line. The `binary'
style has a more ``computer''-like orientation---8K
dots, 16-dot clusters and 48 dots per line (which
makes for 384K lines). The `mega' style is suitable
for downloading very large files---each dot represents
64K retrieved, there are eight dots in a cluster, and
48 dots on each line (so each line contains 3M).
Note that you can set the default style using the
`progress' command in .wgetrc. That setting may be
overridden from the command line. The exception is
that, when the output is not a TTY, the ``dot''
progress will be favored over ``bar''. To force the
bar output, use --progress=bar:force.
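For example (the URL is only illustrative), a large
download could combine the `mega' dot style with a log
file:
wget --progress=dot:mega -o log http://www.example.com/big.iso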
-N
--timestamping
Turn on time-stamping.
-S
--server-response
Print the headers sent by HTTP servers and responses
sent by FTP servers.
--spider
When invoked with this option, Wget will behave as a
Web spider, which means that it will not download the
pages, just check that they are there. You can use it
to check your bookmarks, e.g. with:
wget --spider --force-html -i bookmarks.html
This feature needs much more work for Wget to get
close to the functionality of real WWW spiders.
-T seconds
--timeout=seconds
Set the read timeout to seconds seconds. Whenever a
network read is issued, the file descriptor is checked
for a timeout, which could otherwise leave a pending
connection (uninterrupted read). The default timeout
is 900 seconds (fifteen minutes). Setting timeout to
0 will disable checking for timeouts.
Please do not lower the default timeout value with
this option unless you know what you are doing.
--limit-rate=amount
Limit the download speed to amount bytes per second.
Amount may be expressed in bytes, kilobytes with the k
suffix, or megabytes with the m suffix. For example,
--limit-rate=20k will limit the retrieval rate to
20KB/s. This kind of thing is useful when, for what-
ever reason, you don't want Wget to consume the entire
available bandwidth.
Note that Wget implements the limiting by sleeping
the appropriate amount of time after a network read
that took less time than specified by the rate. Even-
tually this strategy causes the TCP transfer to slow
down to approximately the specified rate. However, it
takes some time for this balance to be achieved, so
don't be surprised if limiting the rate doesn't work
with very small files. Also, the "sleeping" strategy
will misfire when an extremely small bandwidth, say
less than 1.5KB/s, is specified.
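For example (the URL is illustrative), the 20KB/s limit
mentioned above would be applied to a retrieval like
this:
wget --limit-rate=20k http://www.example.com/archive.tar.gz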
-w seconds
--wait=seconds
Wait the specified number of seconds between the
retrievals. Use of this option is recommended, as it
lightens the server load by making the requests less
frequent. Instead of in seconds, the time can be
specified in minutes using the `m' suffix, in hours
using `h' suffix, or in days using `d' suffix.
Specifying a large value for this option is useful if
the network or the destination host is down, so that
Wget can wait long enough to reasonably expect the
network error to be fixed before the retry.
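For instance (the site is illustrative), a recursive
retrieval that pauses two seconds between requests might
look like:
wget -w 2 -r http://www.example.com/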
--waitretry=seconds
If you don't want Wget to wait between every
retrieval, but only between retries of failed down-
loads, you can use this option. Wget will use linear
backoff, waiting 1 second after the first failure on a
given file, then waiting 2 seconds after the second
failure on that file, up to the maximum number of sec-
onds you specify. Therefore, a value of 10 will actu-
ally make Wget wait up to (1 + 2 + ... + 10) = 55 sec-
onds per file.
Note that this option is turned on by default in the
global wgetrc file.
--random-wait
Some web sites may perform log analysis to identify
retrieval programs such as Wget by looking for statis-
tically significant similarities in the time between
requests. This option causes the time between requests
to vary between 0 and 2 * wait seconds, where wait was
specified using the -w or --wait options, in order to
mask Wget's presence from such analysis.
A recent article in a publication devoted to develop-
ment on a popular consumer platform provided code to
perform this analysis on the fly. Its author sug-
gested blocking at the class C address level to ensure
automated retrieval programs were blocked despite
changing DHCP-supplied addresses.
The --random-wait option was inspired by this ill-
advised recommendation to block many unrelated users
from a web site due to the actions of one.
-Y on/off
--proxy=on/off
Turn proxy support on or off. The proxy is on by
default if the appropriate environment variable is
defined.
-Q quota
--quota=quota
Specify download quota for automatic retrievals. The
value can be specified in bytes (default), kilobytes
(with k suffix), or megabytes (with m suffix).
Note that quota will never affect downloading a single
file. So if you specify wget -Q10k
ftp://wuarchive.wustl.edu/ls-lR.gz, all of the ls-
lR.gz will be downloaded. The same goes even when
several URLs are specified on the command-line. How-
ever, quota is respected when retrieving either recur-
sively, or from an input file. Thus you may safely
type wget -Q2m -i sites---download will be aborted
when the quota is exceeded.
Setting quota to 0 or to inf unlimits the download
quota.
Directory Options
-nd
--no-directories
Do not create a hierarchy of directories when retriev-
ing recursively. With this option turned on, all
files will get saved to the current directory, without
clobbering (if a name shows up more than once, the
filenames will get extensions .n).
-x
--force-directories
The opposite of -nd---create a hierarchy of directo-
ries, even if one would not have been created other-
wise. E.g. wget -x http://fly.srk.fer.hr/robots.txt
will save the downloaded file to
fly.srk.fer.hr/robots.txt.
-nH
--no-host-directories
Disable generation of host-prefixed directories. By
default, invoking Wget with -r http://fly.srk.fer.hr/
will create a structure of directories beginning with
fly.srk.fer.hr/. This option disables such behavior.
--cut-dirs=number
Ignore number directory components. This is useful
for getting a fine-grained control over the directory
where recursive retrieval will be saved.
Take, for example, the directory at
ftp://ftp.xemacs.org/pub/xemacs/. If you retrieve it
with -r, it will be saved locally under
ftp.xemacs.org/pub/xemacs/. While the -nH option can
remove the ftp.xemacs.org/ part, you are still stuck
with pub/xemacs. This is where --cut-dirs comes in
handy; it makes Wget not ``see'' number remote direc-
tory components. Here are several examples of how
the --cut-dirs option works.
No options -> ftp.xemacs.org/pub/xemacs/
-nH -> pub/xemacs/
-nH --cut-dirs=1 -> xemacs/
-nH --cut-dirs=2 -> .
--cut-dirs=1 -> ftp.xemacs.org/xemacs/
...
If you just want to get rid of the directory struc-
ture, this option is similar to a combination of -nd
and -P. However, unlike -nd, --cut-dirs does not lose
subdirectories---for instance, with -nH --cut-dirs=1,
a beta/ subdirectory will be placed in xemacs/beta,
as one would expect.
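Using the same example, the `-nH --cut-dirs=1' layout
shown above (files saved under xemacs/) could be pro-
duced with:
wget -r -nH --cut-dirs=1 ftp://ftp.xemacs.org/pub/xemacs/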
-P prefix
--directory-prefix=prefix
Set directory prefix to prefix. The directory prefix
is the directory where all other files and subdirecto-
ries will be saved to, i.e. the top of the retrieval
tree. The default is . (the current directory).
HTTP Options
-E
--html-extension
If a file of type text/html is downloaded and the URL
does not end with the regexp \.[Hh][Tt][Mm][Ll]?, this
option will cause the suffix .html to be appended to
the local filename. This is useful, for instance,
when you're mirroring a remote site that uses .asp
pages, but you want the mirrored pages to be viewable
on your stock Apache server. Another good use for
this is when you're downloading the output of CGIs. A
URL like http://site.com/article.cgi?25 will be saved
as article.cgi?25.html.
Note that filenames changed in this way will be re-
downloaded every time you re-mirror a site, because
Wget can't tell that the local X.html file corresponds
to remote URL X (since it doesn't yet know that the
URL produces output of type text/html). To prevent
this re-downloading, you must use -k and -K so that
the original version of the file will be saved as
X.orig.
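For example (the site is illustrative), combining this
option with -k and -K as suggested above might look
like:
wget -E -k -K -r http://www.example.com/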
--http-user=user
--http-passwd=password
Specify the username user and password password on an
HTTP server. According to the type of the challenge,
Wget will encode them using either the `basic' (inse-
cure) or the `digest' authentication scheme.
Another way to specify username and password is in the
URL itself. Either method reveals your password to
anyone who bothers to run `ps'. To prevent the pass-
words from being seen, store them in .wgetrc or
.netrc, and make sure to protect those files from
other users with `chmod'. If the passwords are really
important, do not leave them lying in those files
either---edit the files and delete them after Wget has
started the download.
For more information about security issues with Wget,
see the ``Security Considerations'' section of the GNU
Info entry for wget.
-C on/off
--cache=on/off
When set to off, disable server-side cache. In this
case, Wget will send the remote server an appropriate
directive (Pragma: no-cache) to get the file from the
remote service, rather than returning the cached ver-
sion. This is especially useful for retrieving and
flushing out-of-date documents on proxy servers.
Caching is allowed by default.
--cookies=on/off
When set to off, disable the use of cookies. Cookies
are a mechanism for maintaining server-side state.
The server sends the client a cookie using the
`Set-Cookie' header, and the client responds with the
same cookie upon further requests. Since cookies
allow the server owners to keep track of visitors and
for sites to exchange this information, some consider
them a breach of privacy. The default is to use cook-
ies; however, storing cookies is not on by default.
--load-cookies file
Load cookies from file before the first HTTP
retrieval. file is a textual file in the format orig-
inally used by Netscape's cookies.txt file.
You will typically use this option when mirroring
sites that require that you be logged in to access
some or all of their content. The login process typi-
cally works by the web server issuing an HTTP cookie
upon receiving and verifying your credentials. The
cookie is then resent by the browser when accessing
that part of the site, and so proves your identity.
Mirroring such a site requires Wget to send the same
cookies your browser sends when communicating with the
site. This is achieved by --load-cookies---simply
point Wget to the location of the cookies.txt file,
and it will send the same cookies your browser would
send in the same situation. Different browsers keep
textual cookie files in different locations:
Netscape 4.x.
The cookies are in ~/.netscape/cookies.txt.
Mozilla and Netscape 6.x.
Mozilla's cookie file is also named cookies.txt,
located somewhere under ~/.mozilla, in the direc-
tory of your profile. The full path usually ends
up looking somewhat like ~/.mozilla/default/some-
weird-string/cookies.txt.
Internet Explorer.
You can produce a cookie file Wget can use by
using the File menu, Import and Export, Export
Cookies. This has been tested with Internet
Explorer 5; it is not guaranteed to work with ear-
lier versions.
Other browsers.
If you are using a different browser to create
your cookies, --load-cookies will only work if you
can locate or produce a cookie file in the
Netscape format that Wget expects.
If you cannot use --load-cookies, there might still be
an alternative. If your browser supports a ``cookie
manager'', you can use it to view the cookies used
when accessing the site you're mirroring. Write down
the name and value of the cookie, and manually
instruct Wget to send those cookies, bypassing the
``official'' cookie support:
wget --cookies=off --header "Cookie: name=value"
--save-cookies file
Save cookies to file at the end of the session. Cookies
whose expiry time is not specified, or those that have
already expired, are not saved.
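As a sketch (the URLs are illustrative, and the exact
login page depends on the site), cookies received while
logging in can be saved and then reused for a later
retrieval:
wget --save-cookies cookies.txt http://www.example.com/login.html
wget --load-cookies cookies.txt -p http://www.example.com/members/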
--ignore-length
Unfortunately, some HTTP servers (CGI programs, to be
more precise) send out bogus `Content-Length' headers,
which makes Wget go wild, as it thinks not all the
document was retrieved. You can spot this syndrome if
Wget retries getting the same document again and
again, each time claiming that the (otherwise normal)
connection has closed on the very same byte.
With this option, Wget will ignore the `Con-
tent-Length' header---as if it never existed.
--header=additional-header
Define an additional-header to be passed to the HTTP
servers. Headers must contain a : preceded by one or
more non-blank characters, and must not contain new-
lines.
You may define more than one additional header by
specifying --header more than once.
wget --header='Accept-Charset: iso-8859-2' \
--header='Accept-Language: hr' \
http://fly.srk.fer.hr/
Specification of an empty string as the header value
will clear all previous user-defined headers.
--proxy-user=user
--proxy-passwd=password
Specify the username user and password password for
authentication on a proxy server. Wget will encode
them using the `basic' authentication scheme.
Security considerations similar to those with --http-
passwd pertain here as well.
--referer=url
Include `Referer: url' header in HTTP request. Useful
for retrieving documents with server-side processing
that assume they are always being retrieved by inter-
active web browsers and only come out properly when
Referer is set to one of the pages that point to them.
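For example (both URLs are illustrative), an image that
is only served when requested from its gallery page
could be fetched with:
wget --referer=http://www.example.com/gallery.html \
http://www.example.com/images/photo.jpg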
-s
--save-headers
Save the headers sent by the HTTP server to the file,
preceding the actual contents, with an empty line as
the separator.
-U agent-string
--user-agent=agent-string
Identify as agent-string to the HTTP server.
The HTTP protocol allows the clients to identify them-
selves using a `User-Agent' header field. This
enables distinguishing the WWW software, usually for
statistical purposes or for tracing of protocol viola-
tions. Wget normally identifies as Wget/version, ver-
sion being the current version number of Wget.
However, some sites have been known to impose the pol-
icy of tailoring the output according to the
`User-Agent'-supplied information. While conceptually
this is not such a bad idea, it has been abused by
servers denying information to clients other than
`Mozilla' or Microsoft `Internet Explorer'. This
option allows you to change the `User-Agent' line
issued by Wget. Use of this option is discouraged,
unless you really know what you are doing.
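If you do need it (the agent string and URL here are
only examples, and, as noted, changing it is discour-
aged), the invocation is simply:
wget -U "Mozilla/4.0 (compatible)" http://www.example.com/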
FTP Options
-nr
--dont-remove-listing
Don't remove the temporary .listing files generated by
FTP retrievals. Normally, these files contain the raw
directory listings received from FTP servers. Not
removing them can be useful for debugging purposes, or
when you want to be able to easily check on the con-
tents of remote server directories (e.g. to verify
that a mirror you're running is complete).
Note that even though Wget writes to a known filename
for this file, this is not a security hole in the sce-
nario of a user making .listing a symbolic link to
/etc/passwd or something and asking `root' to run Wget
in his or her directory. Depending on the options
used, either Wget will refuse to write to .listing,
making the globbing/recursion/time-stamping operation
fail, or the symbolic link will be deleted and
replaced with the actual .listing file, or the listing
will be written to a .listing.number file.
Even though this situation isn't a problem,
`root' should never run Wget in a non-trusted user's
directory. A user could do something as simple as
linking index.html to /etc/passwd and asking `root' to
run Wget with -N or -r so the file will be overwrit-
ten.
-g on/off
--glob=on/off
Turn FTP globbing on or off. Globbing means you may
use the shell-like special characters (wildcards),
like *, ?, [ and ] to retrieve more than one file from
the same directory at once, like:
wget ftp://gnjilux.srk.fer.hr/*.msg
By default, globbing will be turned on if the URL con-
tains a globbing character. This option may be used
to turn globbing on or off permanently.
You may have to quote the URL to protect it from being
expanded by your shell. Globbing makes Wget look for
a directory listing, which is system-specific. This
is why it currently works only with Unix FTP servers
(and the ones emulating Unix `ls' output).
--passive-ftp
Use the passive FTP retrieval scheme, in which the
client initiates the data connection. This is some-
times required for FTP to work behind firewalls.
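For example (the server name is illustrative), a
retrieval from behind a firewall might look like:
wget --passive-ftp ftp://ftp.example.com/pub/file.tar.gz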
--retr-symlinks
Usually, when retrieving FTP directories recursively
and a symbolic link is encountered, the linked-to file
is not downloaded. Instead, a matching symbolic link
is created on the local filesystem. The pointed-to
file will not be downloaded unless this recursive
retrieval would have encountered it separately and
downloaded it anyway.
When --retr-symlinks is specified, however, symbolic
links are traversed and the pointed-to files are
retrieved. At this time, this option does not cause
Wget to traverse symlinks to directories and recurse
through them, but in the future it should be enhanced
to do this.
Note that when retrieving a file (not a directory)
because it was specified on the commandline, rather
than because it was recursed to, this option has no
effect. Symbolic links are always traversed in this
case.
Recursive Retrieval Options
-r
--recursive
Turn on recursive retrieving.
-l depth
--level=depth
Specify recursion maximum depth level depth. The
default maximum depth is 5.
--delete-after
This option tells Wget to delete every single file it
downloads, after having done so. It is useful for
pre-fetching popular pages through a proxy, e.g.:
wget -r -nd --delete-after http://whatever.com/~popular/page/
The -r option is to retrieve recursively, and -nd to
not create directories.
Note that --delete-after deletes files on the local
machine. It does not issue the DELE command to remote
FTP sites, for instance. Also note that when
--delete-after is specified, --convert-links is
ignored, so .orig files are simply not created in the
first place.
-k
--convert-links
After the download is complete, convert the links in
the document to make them suitable for local viewing.
This affects not only the visible hyperlinks, but any
part of the document that links to external content,
such as embedded images, links to style sheets, hyper-
links to non-HTML content, etc.
Each link will be changed in one of two ways:
o The links to files that have been downloaded by
Wget will be changed to refer to the file they
point to as a relative link.
Example: if the downloaded file /foo/doc.html
links to /bar/img.gif, also downloaded, then the
link in doc.html will be modified to point to
../bar/img.gif. This kind of transformation works
reliably for arbitrary combinations of
directories.
o The links to files that have not been downloaded
by Wget will be changed to include host name and
absolute path of the location they point to.
Example: if the downloaded file /foo/doc.html
links to /bar/img.gif (or to ../bar/img.gif), then
the link in doc.html will be modified to point to
http://hostname/bar/img.gif.
Because of this, local browsing works reliably: if a
linked file was downloaded, the link will refer to its
local name; if it was not downloaded, the link will
refer to its full Internet address rather than pre-
senting a broken link. The fact that the former links
are converted to relative links ensures that you can
move the downloaded hierarchy to another directory.
Note that only at the end of the download can Wget
know which links have been downloaded. Because of
that, the work done by -k will be performed at the end
of all the downloads.
-K
--backup-converted
When converting a file, back up the original version
with a .orig suffix. Affects the behavior of -N.
-m
--mirror
Turn on options suitable for mirroring. This option
turns on recursion and time-stamping, sets infinite
recursion depth and keeps FTP directory listings. It
is currently equivalent to -r -N -l inf -nr.
-p
--page-requisites
This option causes Wget to download all the files that
are necessary to properly display a given HTML page.
This includes such things as inlined images, sounds,
and referenced stylesheets.
Ordinarily, when downloading a single HTML page, any
requisite documents that may be needed to display it
properly are not downloaded. Using -r together with
-l can help, but since Wget does not ordinarily dis-
tinguish between external and inlined documents, one
is generally left with ``leaf documents'' that are
missing their requisites.
For instance, say document 1.html contains an `<IMG>'
tag referencing 1.gif and an `<A>' tag pointing to
external document 2.html. Say that 2.html is similar
but that its image is 2.gif and it links to 3.html.
Say this continues up to some arbitrarily high number.
If one executes the command:
wget -r -l 2 http://site/1.html
then 1.html, 1.gif, 2.html, 2.gif, and 3.html will be
downloaded. As you can see, 3.html is without its
requisite 3.gif because Wget is simply counting the
number of hops (up to 2) away from 1.html in order to
determine where to stop the recursion. However, with
this command:
wget -r -l 2 -p http://site/1.html
all the above files and 3.html's requisite 3.gif will
be downloaded. Similarly,
wget -r -l 1 -p http://site/1.html
will cause 1.html, 1.gif, 2.html, and 2.gif to be
downloaded. One might think that:
wget -r -l 0 -p http://site/1.html
would download just 1.html and 1.gif, but unfortu-
nately this is not the case, because -l 0 is equiva-
lent to -l inf---that is, infinite recursion. To
download a single HTML page (or a handful of them, all
specified on the commandline or in a -i URL input
file) and its (or their) requisites, simply leave off
-r and -l:
wget -p http://site/1.html
Note that Wget will behave as if -r had been speci-
fied, but only that single page and its requisites
will be downloaded. Links from that page to external
documents will not be followed. Actually, to download
a single page and all its requisites (even if they
exist on separate websites), and make sure the lot
displays properly locally, this author likes to use a
few options in addition to -p:
wget -E -H -k -K -p http://site/document
To finish off this topic, it's worth knowing that
Wget's idea of an external document link is any URL
specified in an `<A>' tag, an `<AREA>' tag, or a
`<LINK>' tag other than `<LINK REL="stylesheet">'.
Recursive Accept/Reject Options
-A acclist --accept acclist
-R rejlist --reject rejlist
Specify comma-separated lists of file name suffixes or
patterns to accept or reject.
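For example (the site is illustrative), accepting only
JPEG and GIF files during a one-level recursive
retrieval might look like:
wget -r -l1 -A "*.jpg,*.gif" http://www.example.com/gallery/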
-D domain-list
--domains=domain-list
Set domains to be followed. domain-list is a comma-
separated list of domains. Note that it does not turn
on -H.
--exclude-domains domain-list
Specify the domains that are not to be followed.
--follow-ftp
Follow FTP links from HTML documents. Without this
option, Wget will ignore all the FTP links.
--follow-tags=list
Wget has an internal table of HTML tag / attribute
pairs that it considers when looking for linked docu-
ments during a recursive retrieval. If a user wants
only a subset of those tags to be considered, however,
he or she should specify such tags in a comma-sepa-
rated list with this option.
-G list
--ignore-tags=list
This is the opposite of the --follow-tags option. To
skip certain HTML tags when recursively looking for
documents to download, specify them in a comma-sepa-
rated list.
In the past, the -G option was the best bet for down-
loading a single page and its requisites, using a com-
mandline like:
wget -Ga,area -H -k -K -r http://site/document
However, the author of this option came across a page
with tags like `<LINK REL="home" HREF="/">' and came
to the realization that -G was not enough. One can't
just tell Wget to ignore `<LINK>', because then
stylesheets will not be downloaded. Now the best bet
for downloading a single page and its requisites is
the dedicated --page-requisites option.
-H
--span-hosts
Enable spanning across hosts when doing recursive
retrieving.
-L
--relative
Follow relative links only. Useful for retrieving a
specific home page without any distractions, not even
those from the same hosts.
-I list
--include-directories=list
Specify a comma-separated list of directories you wish
to follow when downloading. Elements of list may
contain wildcards.
-X list
--exclude-directories=list
Specify a comma-separated list of directories you wish
to exclude from download. Elements of list may contain
wildcards.
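For example (the site and directory names are only
illustrative), directories that should not be entered
can be listed like this:
wget -r -X /cgi-bin,/private http://www.example.com/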
-np
--no-parent
Do not ever ascend to the parent directory when
retrieving recursively. This is a useful option,
since it guarantees that only the files below a cer-
tain hierarchy will be downloaded.
EXAMPLES
The examples are divided into three sections loosely based
on their complexity.
Simple Usage
o Say you want to download a URL. Just type:
wget http://fly.srk.fer.hr/
o But what will happen if the connection is slow, and
the file is lengthy? The connection will probably
fail before the whole file is retrieved, more than
once. In this case, Wget will try getting the file
until it either gets the whole of it, or exceeds the
default number of retries (this being 20). It is easy
to change the number of tries to 45, to ensure that
the whole file will arrive safely:
wget --tries=45 http://fly.srk.fer.hr/jpg/flyweb.jpg
o Now let's leave Wget to work in the background, and
write its progress to log file log. It is tiring to
type --tries, so we shall use -t.
wget -t 45 -o log http://fly.srk.fer.hr/jpg/flyweb.jpg &
The ampersand at the end of the line makes sure that
Wget works in the background. To unlimit the number
of retries, use -t inf.
o Using FTP is just as simple. Wget will take care of
login and password.
wget ftp://gnjilux.srk.fer.hr/welcome.msg
o If you specify a directory, Wget will retrieve the
directory listing, parse it and convert it to HTML.
Try:
wget ftp://prep.ai.mit.edu/pub/gnu/
links index.html
Advanced Usage
o You have a file that contains the URLs you want to
download? Use the -i switch:
wget -i file
If you specify - as file name, the URLs will be read
from standard input.
o Create a five levels deep mirror image of the GNU web
site, with the same directory structure the original
has, with only one try per document, saving the log of
the activities to gnulog:
wget -r http://www.gnu.org/ -o gnulog
o The same as the above, but convert the links in the
HTML files to point to local files, so you can view
the documents off-line:
wget --convert-links -r http://www.gnu.org/ -o gnulog
o Retrieve only one HTML page, but make sure that all
the elements needed for the page to be displayed, such
as inline images and external style sheets, are also
downloaded. Also make sure the downloaded page refer-
ences the downloaded links.
wget -p --convert-links http://www.server.com/dir/page.html
The HTML page will be saved to
www.server.com/dir/page.html, and the images,
stylesheets, etc., somewhere under www.server.com/,
depending on where they were on the remote server.
o The same as the above, but without the www.server.com/
directory. In fact, I don't want to have all those
random server directories anyway---just save all those
files under a download/ subdirectory of the current
directory.
wget -p --convert-links -nH -nd -Pdownload \
http://www.server.com/dir/page.html
o Retrieve the index.html of www.lycos.com, showing the
original server headers:
wget -S http://www.lycos.com/
o Save the server headers with the file, perhaps for
post-processing.
wget -s http://www.lycos.com/
more index.html
o Retrieve the first two levels of wuarchive.wustl.edu,
saving them to /tmp.
wget -r -l2 -P/tmp ftp://wuarchive.wustl.edu/
o You want to download all the GIFs from a directory on
an HTTP server. You tried wget
http://www.server.com/dir/*.gif, but that didn't work
because HTTP retrieval does not support globbing. In
that case, use:
wget -r -l1 --no-parent -A.gif http://www.server.com/dir/
More verbose, but the effect is the same. -r -l1
means to retrieve recursively, with maximum depth of
1. --no-parent means that references to the parent
directory are ignored, and -A.gif means to download
only the GIF files. -A "*.gif" would have worked too.
o Suppose you were in the middle of downloading, when
Wget was interrupted. Now you do not want to clobber
the files already present. It would be:
wget -nc -r http://www.gnu.org/
o If you want to encode your own username and password
to HTTP or FTP, use the appropriate URL syntax.
wget ftp://hniksic:mypassword@unix.server.com/.emacs
Note, however, that this usage is not advisable on
multi-user systems because it reveals your password to
anyone who looks at the output of `ps'.
o You would like the output documents to go to standard
output instead of to files?
wget -O - http://jagor.srce.hr/ http://www.srce.hr/
You can also combine the two options and make
pipelines to retrieve the documents from remote
hotlists:
wget -O - http://cool.list.com/ | wget --force-html -i -
Very Advanced Usage
o If you wish Wget to keep a mirror of a page (or FTP
subdirectories), use --mirror (-m), which is the
shorthand for -r -l inf -N. You can put Wget in the
crontab file asking it to recheck a site each Sunday:
crontab
0 0 * * 0 wget --mirror http://www.gnu.org/ -o /home/me/weeklog
o In addition to the above, you want the links to be
converted for local viewing. But, after having read
this manual, you know that link conversion doesn't
play well with timestamping, so you also want Wget to
back up the original HTML files before the conversion.
Wget invocation would look like this:
wget --mirror --convert-links --backup-converted \
http://www.gnu.org/ -o /home/me/weeklog
o But you've also noticed that local viewing doesn't
work all that well when HTML files are saved under
extensions other than .html, perhaps because they were
served as index.cgi. So you'd like Wget to rename all
the files served with content-type text/html to
name.html.
wget --mirror --convert-links --backup-converted \
--html-extension -o /home/me/weeklog \
http://www.gnu.org/
Or, with less typing:
wget -m -k -K -E http://www.gnu.org/ -o /home/me/weeklog
FILES
/usr/local/etc/wgetrc
Default location of the global startup file.
.wgetrc
User startup file.
BUGS
You are welcome to send bug reports about GNU Wget to
<bug-wget@gnu.org>.
Before actually submitting a bug report, please try to
follow a few simple guidelines.
1. Please try to ascertain that the behaviour you see
really is a bug. If Wget crashes, it's a bug. If
Wget does not behave as documented, it's a bug. If
things work strange, but you are not sure about the
way they are supposed to work, it might well be a bug.
2. Try to repeat the bug in as simple circumstances as
possible. E.g. if Wget crashes while downloading wget
-rl0 -kKE -t5 -Y0 http://yoyodyne.com -o /tmp/log, you
should try to see if the crash is repeatable, and if
it will occur with a simpler set of options. You might
even try to start the download at the page where the
crash occurred to see if that page somehow triggered
the crash.
Also, while I will probably be interested to know the
contents of your .wgetrc file, just dumping it into
the debug message is probably a bad idea. Instead,
you should first try to see if the bug repeats with
.wgetrc moved out of the way. Only if it turns out
that .wgetrc settings affect the bug, mail me the rel-
evant parts of the file.
3. Please start Wget with -d option and send the log (or
the relevant parts of it). If Wget was compiled with-
out debug support, recompile it. It is much easier to
trace bugs with debug support on.
4. If Wget has crashed, try to run it in a debugger, e.g.
`gdb `which wget` core' and type `where' to get the
backtrace.
SEE ALSO
GNU Info entry for wget.
AUTHOR
Originally written by Hrvoje Niksic.
COPYRIGHT
Copyright (c) 1996, 1997, 1998, 2000, 2001 Free Software
Foundation, Inc.
Permission is granted to make and distribute verbatim
copies of this manual provided the copyright notice and
this permission notice are preserved on all copies.
Permission is granted to copy, distribute and/or modify
this document under the terms of the GNU Free Documenta-
tion License, Version 1.1 or any later version published
by the Free Software Foundation; with the Invariant Sec-
tions being ``GNU General Public License'' and ``GNU Free
Documentation License'', with no Front-Cover Texts, and
with no Back-Cover Texts. A copy of the license is
included in the section entitled ``GNU Free Documentation
License''.
2003-04-01 GNU Wget 1.8.2 WGET(1)