I tried something like:
wget ftp://username:password@ftp.somedomain.com/bla/blabla.txt -x /home/weptile/public_html/bla/blabla.txt
Apparently -x writes the output :) I thought it would overwrite the file I need.
So what I'm trying to do is update blabla.txt in this specific subdirectory daily from an external FTP server: I want to get the file from FTP and overwrite the old file on my server.
Use wget -N to overwrite existing files.
If you get stuck on stuff like this, try man wget or heck, even Google.
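For instance, a minimal sketch reusing the host, credentials and paths from the question:
# -N: re-download and overwrite the local copy only when the remote file is newer
# -P: directory to save the file into
wget -N -P /home/weptile/public_html/bla/ 'ftp://username:password@ftp.somedomain.com/bla/blabla.txt'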
I'm trying to copy a zip file located on a server using an ssh2 library.
The way I'm about to do it is to use the less command and write its output down on the client side:
less -r -L -f zipfile
But the output file is bigger than the original.
I know this is not good practice, but I have to.
So how can I handle this so that I get my zip file on the client machine?
Is less a mandatory command to do that?
You can simply use scp to achieve that: provide the user and host, then the remote file and the local directory to copy it into, as in the example below:
scp your_username@remotehost.edu:foobar.txt /some/local/directory
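Applied to the zip file from the question, it would look something like this (the username, host and remote path here are placeholders):
# scp copies the file byte for byte over SSH, so the local copy matches the original size
scp your_username@remotehost.edu:/path/to/zipfile.zip /some/local/directory/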
I want to download all files from an HTTP server like:
http://dl.mysite.com/files/
and I also want to go inside each folder inside that folder.
But I want to download only those files that have "480p" in their name.
What is the easiest solution for that using wget?
edit:
I want that script to run each night between 2 a.m. and 6 a.m. to sync those files from that server to my PC.
The following wget command, with the flags explained below, should work:
wget -A "*480p*" -r -np -nc --no-check-certificate -e robots=off http://dl.mysite.com/files/
Explanation:
-A "480p" your pattern
-r, recursively recursively look through the folders
-np, --no-parent ignore links to a higher directory
-nc, --no-clobber If a file is downloaded more than once in the same directory, Wget’s behavior depends on a few options, including ‘-nc’. In certain cases, the local file will be clobbered, or overwritten, upon repeated download. In other cases it will be preserved.
--no-check-certificate Don’t check the server certificate against the available certificate authorities.
-e, --execute command A command thus invoked will be executed after the commands in .wgetrc
robots=off turn off robot exclusion, so wget ignores the server's robots.txt
More information on wget flags can be found at the official GNU manual page: https://www.gnu.org/software/wget/manual/wget.html
Regarding running it once per day, you may want to read up on cron jobs. Taken from the documentation page at: https://help.ubuntu.com/community/CronHowto
A crontab file is a simple text file containing a list of commands meant to be run at specified times. It is edited using the crontab command. The commands in the crontab file (and their run times) are checked by the cron daemon, which executes them in the system background.
So basically you need to put your wget command into a script file and set up a cron job to run this file at the specified time.
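For example, a hypothetical crontab entry (added with crontab -e) that runs such a script every night at 2 a.m.; the script path is a placeholder:
# minute hour day-of-month month day-of-week command
0 2 * * * /home/youruser/sync-480p.sh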
Note: Windows does not have a native implementation of Cron, but you can achieve the same effect using the Windows Task Scheduler.
Let's assume I have a file request.txt that looks like:
GET / HTTP/1.0
Some_header: value
text=blah
I tried:
cat request.txt | openssl s_client -connect server.com:443
Unfortunately it didn't work and I need to manually copy & paste the file contents. How can I do it within a script?
cat is not suited to downloading remote files; it's best used for files local to the file system running the script. To download a remote file there are other commands that handle this better.
If your environment has wget installed, you can download the file by URL (see the wget manual for examples of how it's used). That would look like:
wget https://server.com/request.txt
If your environment has curl installed, you can download the file by URL (see the curl manual for examples of how it's used). That would look like:
curl -O https://server.com/request.txt
Please note that if you want to store the response in a variable for further modification you can do this as well with a bit more work.
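For example, a minimal sketch that captures the body into a shell variable with curl (same URL as above; -s just silences the progress output):
# store the downloaded content in a variable, then reuse it
response=$(curl -s https://server.com/request.txt)
echo "$response"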
Also worth noting: if you really must use cat to download a remote file, it is possible, but it will likely require ssh, and I'm not a fan of that method since it needs ssh access to a file that is already publicly available over HTTP/S. There isn't a practical reason I can think of to go about it this way, but for the sake of completeness I wanted to mention that it could be done, though it probably shouldn't be.
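For completeness, a sketch of that ssh-based approach, assuming you have SSH access to the host; the user and remote path here are hypothetical:
# run cat on the remote machine and redirect its output to a local file
ssh user@server.com 'cat /path/to/request.txt' > request.txt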
I'm trying to write a bash script and I need to download certain files with wget,
like libfat-nds-1.0.11.tar.bz2, but over time the version of this file may change, so I would like to download a file that starts with libfat-nds and ends in .tar.bz2. Is this possible with wget?
Using only wget, it can be achieved by specifying the filename, with wildcards, in the list of accepted extensions:
wget -r -np -nd --accept='libfat-nds-*.tar.bz2'
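Note that wget still needs a starting URL for the recursive crawl; a sketch with a hypothetical listing page (substitute the real download directory):
wget -r -np -nd --accept='libfat-nds-*.tar.bz2' http://example.com/downloads/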
The problem is that HTTP doesn't support wildcard downloads. But if directory listing is enabled on the server, or you have an index.html containing the available file names, you could download that, extract the file name you need, and then download the file with wget.
Something in this order:
Download the index with curl
Use grep and/or sed to extract the exact file name
Download the file with wget (or curl)
If you pipe the commands, you can do it in one line, for example:
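Here is a rough sketch of such a one-liner, assuming a hypothetical listing page whose index contains links named libfat-nds-<version>.tar.bz2:
# fetch the index, pick the newest matching file name, then download it
url=http://example.com/downloads/
file=$(curl -s "$url" | grep -o 'libfat-nds-[0-9.]*\.tar\.bz2' | sort -V | tail -n 1)
wget "$url$file"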
I have a list of files inside a TXT file that I need to upload to my FTP server. Is there any Windows batch file or Linux shell script that will process it?
# read the file list and run ncftpput once per file, uploading each to the remote path
cat ftp_filelist | xargs --max-lines=1 ncftpput -u user -p pass ftp_host /remote/path
You can use the wput command.
The syntax is somewhat like this:
wput -i [name of the file.txt]
Go through this link
http://wput.sourceforge.net/wput.1.html
It works on Linux. With this, it will upload all the URLs given in the text file to your FTP server one by one.
You may want to check out Jason Faulkner's batch script, which he wrote about here.