Download CSV file in bash shell - bash

Hello friends, I am trying to download a CSV file from an external URL. I tried with the wget command but I get a 400 Bad Request error; if I paste the URL directly in the browser I can download the CSV file. Is there another way to download this type of file, or another solution? I need to end up with a file containing the CSV content.
Thanks

Have you escaped all special characters in the URL, such as & to \& or $ to \$?
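For example (the URL here is just a placeholder to illustrate the two options):
# escape the metacharacters individually...
wget http://example.com/export.csv\?year=2014\&type=csv
# ...or, more simply, quote the whole URL
wget 'http://example.com/export.csv?year=2014&type=csv' -O data.csv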

If it doesn't have anything to do with authentication/cookies, run some tool in your browser (like Live HTTP Headers) to capture the request headers. If you then mock up those fields in wget, that will get you very close to looking like your browser. It will also show you whether there is any difference in encoding between the wget request and the browser request.
On the other hand, you could also watch the server log files (if you have access to them).
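A rough sketch of what that mock-up might look like (the header values below are placeholders; use whatever your capture tool shows for your own browser and session):
# replay the request with the same headers the browser sent
wget --header='User-Agent: Mozilla/5.0 (X11; Linux x86_64)' \
     --header='Accept: text/csv,*/*' \
     --header='Referer: http://example.com/reports' \
     -O report.csv 'http://example.com/export?format=csv'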

Related

How to download .txt file from box.com on mac terminal?

I do not have much experience with the command line, and I have done my research but still wasn't able to solve my problem.
I need to download a .txt file from a folder on box.com.
I attempted using:
$ curl -o FILE URL
However, all I got was an empty text file named with random numbers. I assume this happened because the URL of the file location does not end in .txt, since the file lives in a folder on box.com.
I also attempted:
$ wget FILE URL
However, my Mac terminal doesn't seem to find that command.
Is there a different command that can download the file from box.com? Or am I missing something?
You need to put your URL in quotes to stop the shell from trying to parse it:
curl -o myfile.txt "http://example.com/"
Update: If the URL requires authentication
Modern browsers allow you to export requests as curl commands.
For example, in Chrome, you can:
open your file URL in a new tab
open Developer tools (View -> Developer -> Developer Tools)
switch to Network tab in the tools
refresh the page, a request should appear in the "Network" tab
Right-click the request, choose "Copy -> Copy as cURL"
paste the command in the shell
Here's roughly what the copied command looks like, for example:
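(The URL, header, and cookie values below are placeholders for illustration; your copied command will contain whatever your browser actually sent.)
curl 'https://example.com/path/to/yourfile.txt' \
  -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X)' \
  -H 'Accept: text/plain,*/*' \
  -H 'Cookie: session=PLACEHOLDER' \
  --compressed
# add -o myfile.txt yourself if you want to save it under a specific name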

How to download all files from hidden directory

I have to download all log files from a virtual directory within a site. Access to the virtual directory is forbidden, but the files themselves are accessible.
I have manually entered the file names to download
dir="Mar"
for ((i=1;i<100;i++)); do
wget http://sz.dsyn.com/2014/$dir/log_$i.txt
done
The problem is that the script is not generic; most of the time I need to find out how many files there are and tweak the for loop. Is there a way to get wget to fetch all the files without me having to specify the exact count?
Note:
If I use the browser to view http://sz.dsyn.com/2014/$dir, it is 403 Forbidden, so I can't pull all the files via a browser tool/extension.
First of all, check this similar question. If this is not what you are looking for, you need to generate a file of URLs and feed it to wget, e.g.
wget --input-file=http://sz.dsyn.com/2014/$dir/filelist.txt
wget will have the same problem your browser has: it cannot read the directory listing. Just pull files until your first failure, then quit.
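A minimal sketch of that "pull until the first failure" approach (it assumes the files really are numbered consecutively with no gaps):
dir="Mar"
i=1
# keep fetching log_1.txt, log_2.txt, ... until a request fails
while wget -q "http://sz.dsyn.com/2014/$dir/log_$i.txt"; do
    i=$((i+1))
done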

How to download a file with wget that starts with a word and has a specific extension?

I'm trying to write a bash script and I need to download certain files with wget,
like libfat-nds-1.0.11.tar.bz2, but the version of this file may change over time, so I would like to download whichever file starts with libfat-nds and ends in .tar.bz2. Is this possible with wget?
Using only wget, this can be achieved by specifying the filename pattern, with wildcards, in the list of accepted files (the target URL here is just a placeholder for the directory you are mirroring):
wget -r -np -nd --accept='libfat-nds-*.tar.bz2' http://example.com/path/to/files/
The problem is that HTTP doesn't support wildcard downloads. But if directory listing is enabled on the server, or you have an index.html containing the available file names, you could download that, extract the file name you need, and then download the file with wget.
Something in this order
Download the index with curl
Use grep and/or sed to extract the exact file name
Download the file with wget (or curl)
If you pipe the commands you can do it on one line.
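A minimal sketch of that pipeline (the base URL is a placeholder, and it assumes the index page contains the file names as plain links):
base='http://example.com/downloads/'
# fetch the index, extract the first matching file name, then download it
file=$(curl -s "$base" | grep -o 'libfat-nds-[^"]*\.tar\.bz2' | head -n 1)
wget "${base}${file}"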

How to wget/curl a dynamically built zip archive?

I'm trying to create a script that will download a dynamically built Initializr zip archive. Something like:
wget http://www.initializr.com/builder?mode=less&boot-hero&h5bp-htaccess&h5bp-nginx&h5bp-webconfig&h5bp-chromeframe&h5bp-analytics&h5bp-build&h5bp-iecond&h5bp-favicon&h5bp-appletouchicons&h5bp-scripts&h5bp-robots&h5bp-humans&h5bp-404&h5bp-adobecrossdomain&jquery&modernizrrespond&boot-css&boot-scripts
That URL works in a browser, but not in a script. In a script it downloads only a small portion of the archive and saves it as builder?mode=less instead of initializr-less-verekia-3.0.zip.
builder?mode=less actually unzips, so it is just a misnamed zip file. But it's missing probably 80% of the files it should have.
Anyone know how to script this?
The URL contains shell metacharacters (& in this case), so you'll have to quote the whole URL:
wget 'your...url...here'
If the Initializr website doesn't put a proper filename into the HTTP response headers, wget will use a best guess based on the URL being requested. You can force it to write to a specific filename with
wget 'your url here' -O name_of_file.zip
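Putting both together for the URL from the question (the output name is just the filename the browser download used):
wget 'http://www.initializr.com/builder?mode=less&boot-hero&h5bp-htaccess&h5bp-nginx&h5bp-webconfig&h5bp-chromeframe&h5bp-analytics&h5bp-build&h5bp-iecond&h5bp-favicon&h5bp-appletouchicons&h5bp-scripts&h5bp-robots&h5bp-humans&h5bp-404&h5bp-adobecrossdomain&jquery&modernizrrespond&boot-css&boot-scripts' -O initializr-less-verekia-3.0.zip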

copy file using a URL from the command line

I have a batch script that is used to collect some data and upload it to other servers, using xcopy in a Windows 7 command line. I want that script to also collect some files that are on SharePoint, so I need to get them using a URL and I need to log in.
xcopy can't do the job, but are there other programs that can do it?
Theoretically, you can bend cURL to download a file from a SharePoint site. If the site is publicly available, it's all very simple. If not, you'll have to authenticate first, and this might be a problem.
wget for Windows, maybe? http://gnuwin32.sourceforge.net/packages/wget.htm
The login part can be done using cURL, supplying the user name and password as POST arguments. You can supply POST args using the -d or --data flag. Once you are logged in (and have the required permission), you can fetch the required file and then simply transfer it using xcopy, as you are already doing for the local files.
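A rough sketch of that flow (the login URL and form field names are assumptions; many SharePoint installs instead need NTLM authentication, i.e. curl --ntlm -u user:password):
# log in once and store the session cookie
curl -c cookies.txt -d "username=myuser" -d "password=mypass" "https://sharepoint.example.com/login"
# reuse the cookie to fetch the file, then xcopy it like any other local file
curl -b cookies.txt -o report.xlsx "https://sharepoint.example.com/sites/team/Documents/report.xlsx"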
