Can you guys tell me what the wget command is and how to use it in Linux? I've read some information about this command, but I don't really know when I should use it or what it's useful for. Any guide?
In a UNIX/Linux environment, you can quickly move between directories with the cd command in the terminal. If you want to fetch a file from the internet and save it in the current directory, using a web browser means extra steps: downloading the file, then choosing where to store it. With the wget tool available on UNIX/Linux, you can download a file straight into the current directory.
Wget is a command-line utility for downloading files and content from the internet, whether from a website or an FTP site. Wget is very flexible and has many options for different purposes.
Typically, the downloaded file keeps the same name as the file in the URL, and progress information is displayed on the screen.
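As a minimal sketch (the URL here is a placeholder, not from the original post):

```shell
# Download a file into the current directory; the saved file
# keeps the name from the URL (here, archive.tar.gz).
wget https://example.com/archive.tar.gz
```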
You can save the downloaded file under a different name using the -O option. If a file with the specified name already exists, the downloaded content overwrites it.
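For example (the filename and URL are illustrative):

```shell
# Save the download as dloaded_file.img instead of the name
# taken from the URL; an existing file of that name is overwritten.
wget -O dloaded_file.img https://example.com/disk.img
```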
Instead of displaying download progress on the screen, you can write that information to a file using the -o option.
With this command, nothing is printed on the screen; the progress log is written to the log file, and the downloaded file is saved as dloaded_file.img.
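A sketch combining both options (the URL is a placeholder):

```shell
# -o writes the progress log to download.log instead of the screen;
# -O names the downloaded file dloaded_file.img.
wget -o download.log -O dloaded_file.img https://example.com/disk.img
```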
3. Automatically retry interrupted downloads
If the connection is unstable, a download may be interrupted and fail. In that case, you would normally re-run the download by hand. Instead of retrying manually, wget provides an option to retry the download automatically every time the connection is lost.
To do this, pass the -t (--tries) option to wget as follows:
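For example, to retry up to five times (the URL is a placeholder):

```shell
# Retry the download up to 5 times if the connection drops.
wget -t 5 https://example.com/bigfile.iso
```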
In the above command, 5 is the number of times wget will retry the download when the connection is lost; replace 5 with however many attempts you want wget to make.
If you do not want to cap the number of retries and want wget to keep retrying until the download completes, set the retry count to 0 (equivalent to inf):

wget -t 0 URL
4. Limit download speed
When your internet bandwidth is limited and many applications share the connection, downloading a large file can consume most of the bandwidth and starve the other applications.
To limit download speed in wget, use the --limit-rate parameter as follows:
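For example (the rate and URL are illustrative):

```shell
# Cap the download speed at 500 KB/s so other applications
# still get a share of the bandwidth.
wget --limit-rate=500k https://example.com/bigfile.iso
```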
-r (recursive) => download recursively; used together with -l.
-l DEPTH => the recursion depth, in levels; wget will only descend the number of levels you specify.
DEPTH => the depth value to use with -l.
-N => turns on timestamping, so a file is only re-downloaded when the remote copy is newer than the local one.
-k or --convert-links => instructs wget to rewrite links in the downloaded pages so that they point to the local copies of those pages.
URL => the base address of the website where the download should start.
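Putting the options above together, a recursive site download might look like this (the depth and URL are illustrative):

```shell
# Mirror the site two levels deep, fetch only files newer than
# the local copies, and rewrite links for offline browsing.
wget -r -l 2 -N -k https://example.com/
```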
In addition to downloading a web page to your machine, you can dump a page as plain text with the lynx command as follows:
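A minimal lynx sketch (the URL and output filename are placeholders):

```shell
# Render the page as plain text and save it to page.txt.
lynx -dump https://example.com/ > page.txt
```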
@BillEssley has already provided you the best answer.
Basically, it's used to download files from the internet over the HTTP, HTTPS, or FTP protocols.
Let's say you have to migrate from a source server to a destination server: you can create a backup file or zip/tar archive on the source server, then use the wget command to download it onto the destination server.
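A rough sketch of that workflow (the hostname, paths, and document root are assumptions, not from the original post):

```shell
# On the source server: archive the data somewhere the web server
# can serve it (assumes /var/www/html is the document root).
tar -czf /var/www/html/backup.tar.gz /home/user/data

# On the destination server: pull the archive with wget.
wget http://source-server.example.com/backup.tar.gz
```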