Commands to download a website using Termux or a Linux terminal
What you are going to learn today can be risky. Please use this method only on your own website; downloading content without permission can be illegal. Today you are going to learn how to copy any website easily. To copy a website, you need some knowledge of Linux-related things. Don't worry about any issues: I was taught this by someone today, and I'll share it with you. This will be a quick tutorial with a list of commands.
To know more about Termux and Termux:API, you can click here.
All you need is the internet and two Android applications, or a Linux terminal with internet access.
Requirements are:
1. Internet
2. The Termux and Termux:API Android applications, or a Linux terminal
3. Knowledge of Termux/terminal commands
To install wget in Termux, run the command:
pkg install wget -y
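To confirm that wget actually installed, you can print its version (a quick sanity check of my own; it works the same in Termux and Linux). If the install fails in Termux, run pkg update -y first to refresh the package lists.
$ wget --version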
Install wget using a Linux terminal (Debian/Ubuntu):
sudo apt install wget -y
Install wget for CentOS/Fedora/RHEL:
sudo yum install wget
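On newer Fedora and RHEL releases, yum has been replaced by dnf; the syntax is the same:
sudo dnf install wget -y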
Now run a simple command to copy an entire website. It can take some time, depending on the number of posts on that website.
wget -r websitelink.com
For example, if I wished to download my own website, I would do the following:
First, install Termux and Termux:API (to do that, please click on the link above).
Install wget using the command above.
Then I would run one of the commands listed below in the terminal:
wget -r uk2blogger.blogspot.com
or
wget --mirror --convert-links --page-requisites https://listofcommands.blogspot.com/
or
wget --mirror -p --convert-links -P ./Local-Folder website.com
or
wget -m -k -p webSite.com
or
wget --limit-rate=20k --wait=60 --random-wait --mirror site-download.com
or
wget -m -r -l inf -k -p -q -E -e robots=off site.com
It will automatically start downloading each link one by one. That is enough to know about downloading a website. I'd suggest you try these commands only on a website you know. Trying them on unknown websites may be illegal, and I'm not responsible for that.
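Once the mirror finishes, wget saves the pages in a folder named after the domain. As a rough sketch of how to browse the copy offline (my own addition, assuming Python is installed, e.g. via pkg install python in Termux):
$ cd listofcommands.blogspot.com
$ python3 -m http.server 8080
Then open http://localhost:8080 in your browser to view the downloaded site.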
Some more wget commands are listed below.
1. Download a single file (you need its direct link)
$ wget http://any-file-link.iso
2. Read URLs from a text file (-i stands for the input file)
$ wget -i ~/Desktop/Link-list-file.txt
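For reference, the input file is just a plain list with one URL per line. A quick way to create it (my own illustration, reusing the placeholder links from this post):
$ printf '%s\n' http://file1-link.iso http://file2-link.tar > ~/Desktop/Link-list-file.txt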
3. Download a file under another name with -O (capital O)
$ wget -O new-file-name.zip http://any-file-link.iso
4. Download multiple files
$ wget http://file1-link.iso http://file2-link.tar
5. Resume an incomplete download
$ wget -c https://link-file.zip
6. Download files in the background
$ wget -b https://download-link.zip
7. Download password-protected HTTP and FTP files with a username and password
$ wget --http-user=narad --http-password=password http://file-link.iso
$ wget --ftp-user=narad --ftp-password=password http://file-link.iso
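A small aside of my own: typing the password on the command line leaves it in your shell history. wget's --ask-password option prompts for it instead:
$ wget --http-user=narad --ask-password http://file-link.iso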
8. Restrict the download speed according to need
$ wget -c --limit-rate=100k http://file-link.zip
9. Increase retry attempts in wget
$ wget --tries=17 http://link-of-the-file.zip
10. Download an entire website for local viewing
$ wget --mirror -p --convert-links -P ./Local-Folder website.com
11. Reject certain file types while downloading
$ wget --reject=png website.com
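Note (my addition): --reject also accepts a comma-separated list of suffixes, so you can skip several file types at once:
$ wget --reject=png,gif,jpg website.com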
12. Quit the download above a certain limit (-Q10m means quit after more than 10 MB)
$ wget -Q10m -i links.txt
13. Skip HTTPS certificate checks
$ wget https://webpage.com/ --no-check-certificate
14. Download and extract a tar file with a single wget command
$ wget -q -O - http://wordpress.org/latest.tar.gz | tar -xzf - --strip-components=1 -C /var/www/html
15. Download a file with a new filename
$ wget link.zip -O NewFileName.zip
16. Silently download any file
$ wget -q http://file-link.zip
17. Download and browse a full website (another command)
$ wget -m -k -p webSite.com
18. Download and save to a specific directory
$ wget -P /directory/path http://file-link.zip
Change the user agent of wget
$ wget --referer=http://google.com --user-agent="Mozilla/5.0 Firefox/4.0.1" http://website-want-to-go-for.com
19. Download all images from a website into a common folder
$ wget --directory-prefix=files/pictures --no-directories --recursive --no-clobber --accept jpg,gif,png,jpeg http://link.com/
20. Download an entire website including all the linked pages and files
$ wget --execute robots=off --recursive --no-parent --continue --no-clobber http://web.com/
21. Download a site without being blocked by the server
$ wget --limit-rate=20k --wait=60 --random-wait --mirror site-download.com
22. Download a complete website
$ wget -m -r -l inf -k -p -q -E -e robots=off site.com
Download raw files from GitHub with wget
$ wget https://raw.linkof-the-file/file.html
You should also check:
Things you should do after installing Metasploit
How to install the top 6 desktop environments
Top 6 command-line based browsers for Linux