I am downloading multiple files using curl. The base URL is the same for all the files:
https://mydata.gov/daily/2017
The data in these directories are further grouped by date and file type, so the first file I need is at
https://mydata.gov/daily/2017/001/17d/Roger001.gz
The second file is
https://mydata.gov/daily/2017/002/17d/Roger002.gz
I need to download everything up to the data for the last day of 2017, which is
https://mydata.gov/daily/2017/365/17d/Roger365.gz
How can I use curl or any other similar tool to download all the files to a single local folder, preferably adopting the original file names?
In a bash terminal you can loop over the day numbers:
for f in {001..365}; do curl https://mydata.gov/daily/2017/"$f"/17d/Roger"$f".gz -o /your-directory/Roger"$f".gz; done
Replace /your-directory with the directory where you want to save the files.
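If the server is flaky, a slightly more robust version of the same loop may help (a sketch; the --fail, --retry, and mkdir additions are my own, and /your-directory is still a placeholder):
mkdir -p /your-directory
for f in {001..365}; do
    curl --fail --retry 3 "https://mydata.gov/daily/2017/${f}/17d/Roger${f}.gz" -o "/your-directory/Roger${f}.gz"
done
Since 2017 is not a leap year, the {001..365} range covers every day of the year, and brace expansion keeps the leading zeros.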
Related
I downloaded a big folder in Google Drive that was split into 5 parts:
Myfolder-20200911T192019Z-001.zip
Myfolder-20200911T192019Z-002.zip
Myfolder-20200911T192019Z-003.zip
Myfolder-20200911T192019Z-004.zip
Myfolder-20200911T192019Z-005.zip
I'm having some trouble extracting it back into the single folder it originally was. Is there a straightforward way to unzip all of them together and recreate the original folder? Maybe some specific command in gzip? I don't wish to install any program just to perform this task.
The answer suggesting simply concatenating the zips didn't work for me:
the files needed to be unzipped sequentially rather than just concatenated together.
For anyone coming here from a general search about combining zip files but specifically to combine multipart zips from Google Drive, I found this answer to be the one that worked:
https://superuser.com/questions/1255221/how-to-unzip-multiple-zip-files-into-a-single-directory-structure-e-g-google-d
i.e. for the above (creating an output directory to start with if necessary):
mkdir outputFolder
unzip "Myfolder-20200911T192019Z-00*" -d outputFolder
You can do cat Myfolder-20200911T192019Z* > total.zip to combine your zip files and then run unzip total.zip
I am trying to code a script to automatically process some of our daily ftp files.
I have already coded the files to download from the source FTP using WinSCP, calling it in a .bat file, and would ideally like to call the new script from the same bat. The scripting language does not matter, as long as I can run or call it from the original batch file.
I need a script that will extract the date from a filename and unzip the contents into corresponding folders. The source file is delivered automatically daily via FTP, and the filename is:
SOFL_CLAIM_TC201702270720000075.zip
The date portion, 20170227 in this example, is what I would like to extract.
The .zip contains two types of content: multiple PDFs and a .dat file.
For the supplied date of 20170227, the pdfs need to get extracted to a folder following the format:
\%root%\FNOIs\2017\02-Feb\02-27-2017
At the same time, the .dat file needs to get extracted to multiple folders following the format:
\%root%\Claim Add\2017 Claim Add\02-2017
\%root2%\vendorFTP\VendorFolder
After extracting, I need to move the source zip to
\%root%\Claim Add\2017 Claim Add\02-2017
What is the best way of accomplishing all of this?
I am assuming it would be the for /f batch command, but I am new to batch coding and cannot figure out how to start it from scratch.
I also have 7zip installed, but do not understand how to use the command-line options.
You have asked for a lot in one question, and not shown any code or demonstrated effort on your part.
For the first part, once you have the filename in a variable:
set FILENAME=SOFL_CLAIM_TC201702270720000075.zip
You can get the date part with:
echo %FILENAME:~13,-14%
The syntax ~13,-14 means "remove the first 13 characters and the last 14 characters." That should leave you with just the date.
When you integrate that into your script, show your code.
Does anyone know how to batch download images relying on just a list of image URLs as the data source? I've looked through applications but all I could find was this: http://www.page2images.com/ (which only takes a screenshot of each URL rather than downloading the actual images).
So have a server running whatever you'd like.
Send an array of image names to the server - use whatever language you want but have the function do a for loop over the array
Execute wget https://image.png for each entry (let's say you use NodeJS; that would be something like child_process.execSync('wget ' + imgList[i])) - this will download everything to your current directory
Once the for loop is finished, the next step is to bundle all your items with tar -zcvf files.tar.gz ./ - this will create a tarball of all the files within that directory
Download that tar
If you want to get fancy with this, you should create a randomly named directory and point all your commands at that directory. So you would say wget -P ./jriyxjendoxh/ https://image.png to get the file into the randomly named folder. Then at the end tar -zcvf files.tar.gz jriyxjendoxh/*
Then, to make sure you have all the files downloaded, you can create a semaphore that blocks the creation of the tarball until the number of downloaded files equals the count of the passed-in array. That would be a real fancy way to make sure all the files are downloaded.
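A rough shell sketch of those steps (the urls.txt name and the imgs.XXXXXXXX template are placeholders of my own):
workdir=$(mktemp -d ./imgs.XXXXXXXX)    # randomly named download directory
while IFS= read -r url; do
    wget -P "$workdir" "$url"           # fetch each image into that directory
done < urls.txt
tar -zcvf files.tar.gz "$workdir"       # bundle everything into one tarball
Comparing the number of files in $workdir with the number of lines in urls.txt before running tar would serve as the "semaphore" check described above.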
Hi there. You could try Free Download Manager, or, if you have Linux, use the wget command with a text source file.
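For the wget route, a one-liner is enough once the image URLs are listed one per line in a text file (the file and folder names here are assumptions):
wget -i image-urls.txt -P ./images
-i reads the URL list from the file and -P puts every download into the given folder.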
I have a .bin file that will consist of 3 files:
1. tar.gz file
2. .zip file
3. install.sh file
For now the install.sh file is empty. I am trying to write a shell script that should be able to extract the .zip file and copy the tar.gz file to a specific location when the *.bin file is executed on an Ubuntu machine. There is a Jenkins job that will pull in these 3 files to create the *.bin file.
My Question is how do I access the tar.gz and .zip file from my shell script ?
There are two general tricks that I'm aware of for this sort of thing.
The first is to use a file format that will ignore invalid data and find the correct file contents automatically (I believe zip is one such format/tool).
When this is the case you just run the tool on the packed/concatenated file and let the tool do its job.
For formats and tools where that doesn't work or isn't possible, the general trick is to embed a marker in the concatenated file such that the original script ignores the extra data but can operate on itself to "extract" the embedded data, so the other tool can work on the extracted contents.
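As a minimal sketch of that marker trick (the marker name and the target path are placeholders of my own, and the Jenkins job would be the one appending the tar.gz payload right after the marker when it assembles the .bin):
#!/bin/sh
# install.sh sketch: everything after the __PAYLOAD_BELOW__ line is the appended payload.
MARKER=$(grep -a -n -m 1 '^__PAYLOAD_BELOW__$' "$0" | cut -d: -f1)
tail -n +$((MARKER + 1)) "$0" | tar -xzf - -C /opt/target
exit 0
__PAYLOAD_BELOW__
The exit 0 before the marker keeps the shell from trying to interpret the binary payload; the .zip part could be appended after a second marker, written out to a temporary file with tail, and then extracted with unzip.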
I've been using grepWin, and I would like to perform a series of queries for PDF links within .html files.
So far I have just been entering each individual PDF name into the tool and copying the file paths of each reference.
This works fine, but I have several hundred specific PDFs whose references I need to find, and I was wondering whether this is possible with Cygwin or some other command-line tool like findstr, by piping in a text file of the PDF names I am searching for.
I will give an example:
Spring-Summer.pdf
I would copy the paths of all the HTML files that link to the listed file.
I then need those paths copied next to the PDF name, in their own column of a CSV.
I'm not sure whether anyone has asked this before. Currently I'm filling out a spreadsheet of links to these files for a website.
In Linux the following command will find all the html files which contain the specified string:
grep -Rl "Spring-Summer.pdf" <some root folder>
The -R option searches recursively, and -l lists just the names of the matching files instead of the matching lines.
The same should work on Cygwin.
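To cover the whole list in one pass and get CSV output, a sketch along these lines should also work in Cygwin's bash (pdf-list.txt, the ./site folder, and links.csv are assumptions of mine):
while IFS= read -r pdf; do
    grep -Rl --include='*.html' "$pdf" ./site | while IFS= read -r htmlfile; do
        printf '%s,%s\n' "$pdf" "$htmlfile"    # one "PDF name,HTML path" row per match
    done
done < pdf-list.txt > links.csv
Each row of the resulting CSV pairs a PDF name with the path of an HTML file that references it.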