Curl command response output has newlines and extra characters - shell

I am using the curl command below to download a file from a URL, but the output file has newlines and extra characters which corrupt the Tiff file.
curl -k -u Username:Password URL >/Test.Tiff
A sample Test.Tiff contains the following data:
1.
2.
3.IDCFILE87918
4.II*ÿûÞ©¥zKÿJÛï_]ÿÿÿ÷ÿÞï¹×ëÿ¤ÿO]
5¿ûÕÿÿ¯zê¿ß£0•¿þÛ¯kÚÿ¹5Éöûé_u_éwÕzkJï·_¯¯ßþýuw]í~þžmúºßÿzÈfçúîC7½õëÿÛ¯ô¿Z[6.ý®Úö·4ýý ~«v×ÿº^Ÿ¿í¾Ýÿzuýëÿ÷×]}ûÿõé‰ÿ¿m/KûÿµÛ_ý¾×Oín½}+wýzíýö¿õÿî—7.ékñN¿û­Sߦ=ºì%±N—í¯i_Û¶¬:×·m{
8.ÿ­¶ÿím¿í/ívÒ®ÒP­¯Õ¥¶¿}SÛúì%Ú_kûim­ú«i·V½»
9..Âýt•¿ßoÛ]¦Òý´»KßØaPaa…å87M…VÂúý?ÿa„˜ei
The first three lines are extra and should not be there (lines 1 and 2 are blank lines, which show up as ^M in the vi editor). When I delete the first three lines and save the file, I am able to open it.
Let me know how these first three lines are getting appended.

Update: Try grepping the curl output to remove blank lines, like this:
curl -k -u <username>:<password> <url> | grep -v '^$' > /Test.Tiff
Curl also has the --output <name> option to redirect output to a file. Alternatively, you can first write the response to a file and then use it as the grep input:
curl -k -u <username>:<password> <url> > curl_out.txt
grep -v '^$' curl_out.txt > Test.Tiff
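If you want to confirm what those leading bytes actually are before stripping anything (the sample above shows the real image data starting at II*, which is the little-endian TIFF signature), a quick diagnostic sketch with od is below; the byte count is arbitrary.
# Inspect the first bytes of the downloaded file; anything before the
# "I I *" TIFF signature is the extra data being prepended
head -c 16 /Test.Tiff | od -c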

Related

Loop through lines, Curl for a value, Store the Value

I'm trying to read a file, each line of which is a CVE ID. For each CVE, I want to make a curl call to get its severity and then store that result in a new CSV file with the format cve-id,cve-severity.
Below is the script I'm using, which reads the IDs correctly, but doesn't make the curl call correctly. When I run this, it just outputs empty values for each curl call.
I've tried using backticks instead of the $(), but got the same result. What am I doing wrong here?
#!/bin/bash
filename="cves.csv"
while read line
do
echo "$line"
cve_result=$(curl -s "https://cve.circl.lu/api/cve/${line}")
echo "$cve_result"
done < $filename
I also tried these variations, all with the same (empty) result:
cve_result=$(curl -s "https://cve.circl.lu/api/cve/${line}")
cve_result=`curl -s "https://cve.circl.lu/api/cve/${line}"`
cve_result=$(curl -s "https://cve.circl.lu/api/cve/$line")
cve_result=`curl -s "https://cve.circl.lu/api/cve/$line"`
cve_result=$(curl -s https://cve.circl.lu/api/cve/$line)
cve_result=`curl -s https://cve.circl.lu/api/cve/$line`
Here is a sample of the CSV file:
CVE-2014-0114
CVE-2014-9970
CVE-2015-1832
CVE-2015-2080
CVE-2015-7521
Your code works for me (i.e., each curl call pulls down a bunch of data).
If I convert my (Linux) file to contain Windows/DOS line endings (\r\n), then the curl calls don't generate anything.
At this point I'm guessing your input file has Windows/DOS line endings (you can verify by running head -2 cves.csv | od -c; you should see the sequence \r \n at the end of each line).
Assuming this is your issue, you need to remove the \r characters; a couple of options:
dos2unix cves.csv - only has to be run once, as this updates the file in place
curl ... "${line//$'\r'/}" - use parameter substitution to strip out the \r
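Putting it together, here is a minimal sketch of the whole loop. It strips the carriage returns inside the loop (so dos2unix isn't strictly required) and assumes the API's JSON response exposes the severity under a cvss field; the jq filter and the output file name are assumptions, so adjust them to the actual schema.
#!/bin/bash
# Sketch: read CVE IDs, strip any DOS carriage returns, query the API,
# and write "cve-id,cve-severity" rows. The .cvss field name is an
# assumption about the JSON schema; adjust the jq filter as needed.
filename="cves.csv"
outfile="cve_severity.csv"
: > "$outfile"
while read -r line
do
    line="${line//$'\r'/}"   # strip the DOS carriage return, if any
    severity=$(curl -s "https://cve.circl.lu/api/cve/${line}" | jq -r '.cvss')
    echo "${line},${severity}" >> "$outfile"
done < "$filename"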

curl: (3) URL using bad/illegal format or missing URL in bash Windows

I am trying to download PDF files from a list of URLs in a .txt file ('urls.txt'), with one URL per line.
When I use the following command, where the URL I used is an exact copy-paste of the first line of the .txt file:
$ curl http://www.isuresults.com/results/season1617/gpchn2016/gpchn2016_protocol.pdf -o 'test.pdf'
the PDF downloads perfectly. However, when I use this command:
xargs -n 1 curl -O < urls.txt
I receive a 'curl: (3) URL using bad/illegal format or missing URL' error once for each URL listed in the .txt file. I have tested many of the URLs individually, and they all seem to download properly.
How can I fix this?
Edit - the first three lines of urls.txt read as follows:
http://www.isuresults.com/results/season1718/gpf1718/gpf2017_protocol.pdf
http://www.isuresults.com/results/season1718/gpcan2017/gpcan2017_protocol.pdf
http://www.isuresults.com/results/season1718/gprus2017/gprus2017_protocol.pdf
SOLVED: As per the comment below, the issue was that the .txt file was in DOS/Windows format. I converted it using the line:
$ dos2unix urls.txt
and then the files downloaded perfectly using my original line of code. See this thread for more info: Are shell scripts sensitive to encoding and line endings?
Thank you to all who responded!
Try using
xargs -n 1 -t -a urls.txt curl -O
Here the -a option reads the list from a file rather than from standard input.
EDIT:
As @GordonDavisson mentioned, it looks like you may have a file with DOS line endings; you can clean these up using sed before passing to xargs:
sed 's/\r//' < urls.txt | xargs -n 1 -t curl -O
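If sed isn't handy, an equivalent sketch with tr simply deletes every carriage return before handing the list to xargs:
# Delete all carriage returns, then feed the cleaned URL list to xargs
tr -d '\r' < urls.txt | xargs -n 1 curl -O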

Loop to read values from two files and use them as variables in a curl in shell

I need to create a loop so I can use values listed in two text files as variables in a curl command.
For example, let's say there is a list named destinations.txt that looks like this:
facebook.com
pinterest.com
instagram.com
And there is another file named keys.txt which includes the API keys used to make calls to each destination; this file looks like:
abcdefghij-123
mnopqrstuv-456
qwertyuiop-789
The idea of this loop is to pull this data so I can run 3 curls, each using the data from its corresponding line. This is an example, assuming $destination and $key are the values pulled from the txt files:
curl -k 'https://'"$destination"'//api/?type=op&cmd=asdasdf='"$key"
These would be the expected results:
1st round:
curl -k https://facebook.com//api/?type=op&cmd=asdasdf=abcdefghij-123
2nd round:
curl -k https://pinterest.com//api/?type=op&cmd=asdasdf=mnopqrstuv-456
3rd round:
curl -k https://instagram.com//api/?type=op&cmd=asdasdf=qwertyuiop-789
I've tried multiple times with nested while/for loops and paste; however, the results are not as expected since the data gets duplicated.
You can combine the two files using the paste and awk commands and then call curl iteratively in a for loop:
paste "destinations.txt" "keys.txt" > combined.txt
awk '{ printf "https://%s//api/?type=op&cmd=asdasdf=%s\n", $1, $2 }' combined.txt > allCurls.txt
for crl in `more +1 allCurls.txt`
do
    curl -k "$crl"
done
If you want to redirect the output of the curl to some file:
count=1
for crl in `more +1 allCurls.txt`
do
    # -v count=$count passes the shell variable count into awk, which then
    # prints line number count (the destination) from destinations.txt
    filename=`awk -v count=$count 'NR==count{print $1}' destinations.txt`
    filename="${filename}_curl_result.txt"
    # redirect the curl output to the file named after the destination
    curl -k "$crl" > "${filename}"
    count=$((count+1))
done
Try something like this:
while read -u 4 destination && read -u 5 key; do
    curl -k "https://${destination}/blablabal/${key}"
done 4<destinations.txt 5<keys.txt
As long as those two files stay in sync, it should work.
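Another compact variant, a sketch that also assumes the two files stay line-for-line in sync, lets paste pair the lines up and reads both fields in a single loop:
# paste joins line N of each file (tab-separated); read then splits the pair
# back into the two variables used to build the curl URL
paste destinations.txt keys.txt | while read -r destination key; do
    curl -k "https://${destination}//api/?type=op&cmd=asdasdf=${key}"
done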

grepping the output of a curl command within a bash script

I am currently attempting to make a script so that when I enter the name of a vulnerability it will return the CVSS3 scores from Tenable.
So far my plan is:
Curl the page
Grep the content I want
Output the grepped CVSS3 score
When running my script, however, grep throws the following error:
~/Documents/Tools/Scripts ❯ ./CVSS3-Grabber.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 30964 0 30964 0 0 28355 0 --:--:-- 0:00:01 --:--:-- 28355
grep: unrecognized option '-->Nessus<!--'
Usage: grep [OPTION]... PATTERNS [FILE]...
Try 'grep --help' for more information.
This has me very confused, because when I run this on the command line (curling the content to sample.txt first) and then use the exact same grep syntax:
grep $pagetext -e CVSS:3.0/E:./RL:./RC:.
it returns the content I need. However, when I run it via my script below...
#! /bin/bash
pagetext=$(curl https://www.tenable.com/plugins/nessus/64784)
cvss3_temporal=$(grep $pagetext -e CVSS:3.0/E:./RL:./RC:.)
echo $cvss3_temporal
I receive the errors above!
I believe this is because the '--' makes grep treat text from the page as an option it doesn't recognize, hence the error. I have tried copying the output of the curl to a text file and then grepping that rather than grepping straight from the curl, but still no joy. Does anyone know of a method to get grep to ignore '--' or any flags when reading text? Or alternatively, can I configure curl so that it only brings back text and no symbols?
Thanks in advance!
You don't need to store the curl response in a variable; just pipe grep after curl, like this:
cvss3_temporal=$(curl -s https://www.tenable.com/plugins/nessus/64784 |
grep -F 'CVSS:3.0/E:./RL:./RC:.')
Note the use of -s in curl to suppress the progress output and -F in grep to make sure you are searching for a fixed string.
Grep filters a given file, or standard input if none was given. In bash, you can use the <<< here-string syntax to send the variable's content to grep's input:
grep -e 'CVSS:3.0/E:./RL:./RC:.' <<< "$pagetext"
Or, if you don't need the page anywhere else, you can pipe the output from curl directly to grep:
curl https://www.tenable.com/plugins/nessus/64784 | grep -e 'CVSS:3.0/E:./RL:./RC:.'
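If you only want the score fragment itself rather than the whole matching line, a small sketch using grep's -o flag (same pattern as above, which may need tightening for the real page markup) is:
# -o prints only the part of each line that matches the pattern
curl -s https://www.tenable.com/plugins/nessus/64784 | grep -o -e 'CVSS:3.0/E:./RL:./RC:.'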

Unable to capture cURL output in a text file?

I am using this command to capture the time taken by a cURL command:
time curl -v -k http://10.164.128.232:8011/oam/server/HeartBeat >> abc.txt
This leaves abc.txt blank. I further tried this:
time curl -v -k http://10.164.128.232:8011/oam/server/HeartBeat 2>> bcde.txt
I was expecting this command to write the complete console output to my text file, but it didn't capture the time in bcde.txt.
I am unable to find a way to capture cURL's output alongside the time taken by it.
Please assist me on this.
When time is the shell keyword, the redirection is treated as part of the command being timed, and the timing report itself is written to the shell's stderr, so it never ends up in your file. You can get past that by grouping the command so the redirection applies to the whole thing, timing included:
(time curl -v -k http://10.164.128.232:8011/oam/server/HeartBeat) >> abc.txt
(time curl -v -k http://10.164.128.232:8011/oam/server/HeartBeat) 2>> abc.txt
This worked for me!
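If you want the response body, the -v verbose output, and the timing report all in the same file, a sketch that groups the command and redirects both streams would be:
# The braces group the whole command, so the appended redirections apply to
# everything inside, including the timing report written to stderr
{ time curl -v -k http://10.164.128.232:8011/oam/server/HeartBeat ; } >> abc.txt 2>&1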
