I'm trying to extract data from an API, with the different IDs stored in a text file, but I keep getting the message "curl: (3) Illegal characters found in URL".
The text file contains:
362ae-235sa-3h26g-136gr
652ae-290sa-3h26g-132gr
394ae-275sa-k726g-106gr
362ae-257sa-3le0g-136gr
My script:
for j in $(cat ids.json)
do
curl -u "$workspace_username":"$workspace_password" \
"https://gateway.watsonplatform.net/assistant/api/v1/workspaces/$j/logsversion=2018-07-10" \
| jq '.' | jq -r '.logs[]' >> test.json
sleep 3
done
I'm new to this. Can anyone please help me with the script?
I could reproduce your problem with a CR attached to a line in the file ids.json, so I can only assume that this is also your problem. I propose fixing your file.
You can do that automatically by removing all characters which are not part of the IDs that are supposed to be in the file:
sed -i 's/[^0-9a-z-]//g' ids.json
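For completeness, a minimal sketch of the same loop that strips a stray CR on the fly instead of editing ids.json, reusing your credential variables and endpoint:
while IFS= read -r j; do
    j=${j%$'\r'}    # drop a trailing carriage return, if present
    curl -u "$workspace_username:$workspace_password" \
        "https://gateway.watsonplatform.net/assistant/api/v1/workspaces/$j/logs?version=2018-07-10" \
        | jq -r '.logs[]' >> test.json
    sleep 3
done < ids.json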
I'm trying to read a file, each line of which is a CVE ID. For each CVE, I want to make a curl call to get its severity and then store that result in a new CSV file with the format cve-id,cve-severity.
Below is the script I'm using, which reads the IDs correctly but doesn't make the curl call correctly. When I run this, it just outputs empty values for each curl call.
I've tried using backticks instead of $(), but I get the same result. What am I doing wrong here?
#!/bin/bash
filename="cves.csv"
while read line
do
echo "$line"
cve_result=$(curl -s "https://cve.circl.lu/api/cve/${line}")
echo "$cve_result"
done < $filename
Also tried these variations, all with the same (empty) result:
cve_result=$(curl -s "https://cve.circl.lu/api/cve/${line}")
cve_result=`curl -s "https://cve.circl.lu/api/cve/${line}"`
cve_result=$(curl -s "https://cve.circl.lu/api/cve/$line")
cve_result=`curl -s "https://cve.circl.lu/api/cve/$line"`
cve_result=$(curl -s https://cve.circl.lu/api/cve/$line)
cve_result=`curl -s https://cve.circl.lu/api/cve/$line`
Here is a sample of the CSV file:
CVE-2014-0114
CVE-2014-9970
CVE-2015-1832
CVE-2015-2080
CVE-2015-7521
Your code works for me (i.e., each curl call pulls down a bunch of data).
If I convert my (Linux) file to contain Windows/DOS line endings (\r\n), then the curl calls don't generate anything.
At this point I'm guessing your input file has Windows/DOS line endings (you can verify by running head -2 cves.csv | od -c; you should see the sequence \r \n at the end of each line).
Assuming this is your issue, you need to remove the \r characters; a couple of options:
dos2unix cves.csv - you only have to run this once, since it updates the file in place
curl ... "${line//$'\r'/}" - use parameter substitution to strip out the \r
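Putting it together, a minimal sketch that writes the cve-id,cve-severity rows to a new file (cve_severities.csv is just an illustrative name); the .cvss field is an assumption about the JSON this API returns, so adjust the jq filter to the actual layout:
#!/bin/bash
filename="cves.csv"
while IFS= read -r line; do
    line=${line//$'\r'/}    # strip any DOS carriage return
    severity=$(curl -s "https://cve.circl.lu/api/cve/${line}" | jq -r '.cvss')
    echo "${line},${severity}"
done < "$filename" > cve_severities.csv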
I am trying to download PDF files from a list of URLs in a .txt file ('urls.txt'), with one URL per line.
When I use the following command, where the URL I used is an exact copy-paste of the first line of the .txt file:
$ curl http://www.isuresults.com/results/season1617/gpchn2016/gpchn2016_protocol.pdf -o 'test.pdf'
the PDF downloads perfectly. However, when I use this command:
xargs -n 1 curl -O < urls.txt
Then I receive a 'curl: (3) URL using bad/illegal format or missing URL' error once for every URL listed in the .txt file. I have tested many of the URLs individually, and they all seem to download properly.
How can I fix this?
Edit - the first three lines of urls.txt read as follows:
http://www.isuresults.com/results/season1718/gpf1718/gpf2017_protocol.pdf
http://www.isuresults.com/results/season1718/gpcan2017/gpcan2017_protocol.pdf
http://www.isuresults.com/results/season1718/gprus2017/gprus2017_protocol.pdf
SOLVED: As per the comment below, the issue was that the .txt file was in DOS/Windows format. I converted it using the line:
$ dos2unix urls.txt
and then the files downloaded perfectly using my original line of code. See this thread for more info: Are shell scripts sensitive to encoding and line endings?
Thank you to all who responded!
Try using
xargs -n 1 -t -a urls.txt curl -O
Here the -a option reads the list from a file rather than from standard input. (Note that -a is a GNU xargs extension.)
EDIT:
As @GordonDavisson mentioned, it looks like you may have a file with DOS line endings; you can clean these up using sed before passing the list to xargs:
sed 's/\r//' < urls.txt | xargs -n 1 -t curl -O
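tr works just as well here if you prefer it; either way the CRs are removed from the stream without modifying urls.txt:
tr -d '\r' < urls.txt | xargs -n 1 -t curl -O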
I need to create a bash file in order to run a certain command on a server.
Here is one of the lines:
Programm/programm.pl -k 1 -q --acc_number
where --acc_number needs a comma-separated list of accession numbers, e.g. --acc_number Number13JJ2,Number0090D93,Number088DF.
But I actually have a file called file_acc_number with one accession number per line, such as:
Number13JJ2
Number0090D93
Number088DF
Does someone have an idea how to parse this file and put the accession numbers directly into a comma-separated list, to get:
Programm/programm.pl -k 1 -q --acc_number Number13JJ2,Number0090D93,Number088DF
Thank you for your help.
Try using paste:
Programm/programm.pl -k 1 -q --acc_number $(paste -s -d, file_acc_number)
Run paste -s -d, file_acc_number on its own first to confirm it produces what you require.
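With the sample file above, the embedded command expands to exactly the list the program expects:
$ paste -s -d, file_acc_number
Number13JJ2,Number0090D93,Number088DF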
With an inline expansion, maybe? Like this:
Programm/programm.pl -k 1 -q --acc_number $(sed -z 's/\n/,/g' file_acc_number)
Make sure your file file_acc_number has no newline at the very end of it.
With this, you replace each newline character with a comma on the fly, without affecting the original file.
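If you can't guarantee the absence of a final newline, a variant that trims the resulting trailing comma after the substitution (still GNU sed, since -z is a GNU extension):
Programm/programm.pl -k 1 -q --acc_number $(sed -z 's/\n/,/g; s/,$//' file_acc_number)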
I have something like this (it replaces a special sequence in a request and then sends it with curl):
SPECIAL_SEQUENCE=My_value
sed -i -e "s|SPECIAL_SEQUENCE|$SPECIAL_SEQUENCE|g" file.txt
curl http://127.0.0.1:1478/ -X POST -d @file.txt
It does work OK. However, the problem for me is that it leaves file.txt changed. I could undo the sed at the end of the script, but I'd rather not (because I often interrupt this script with Ctrl+C).
Can you give me some other ideas for dealing with this? In other words, the final form of the request is only known while the script is executing.
Instead of modifying the file, modify it in memory and pipe the result to curl:
SPECIAL_SEQUENCE=My_value
sed -e "s|SPECIAL_SEQUENCE|$SPECIAL_SEQUENCE|g" file.txt | curl http://127.0.0.1:1478/ -X POST -d @-
The -d @- tells curl to read the POST body from standard input.
I am facing a strange problem. An answer to what I want to do already exists here. I am trying to remove trailing commas from each line of a file containing thousands of lines.
This is my command:
sed -i 's/,*$//g' file_name.csv
However, the output I get is exactly the same as the input, and the trailing commas are not removed.
I think sed is not matching the pattern and thus failing to replace the commas. To check whether there are any hidden characters in the file, I used Vim's :set list option.
There are only $ markers at the end of each line, which is just what is expected.
I can't understand why the command is failing.
I can suggest two options; the first one is my favorite:
dos2unix file
# works for huge files too
then try running your command again.
The other way to do it:
cat file | tr -d '\r' > file
# unsafe: the redirection truncates file before cat can read it, so prefer the tmp-file form below
then run your command.
tr -d '\r' < file > file.tmp ; mv file.tmp file
# works for huge files too
Thanks to @Nahuel for suggesting the last command.
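If you'd rather fix everything in one pass, the carriage return can be handled in the same sed call that strips the commas; a sketch assuming GNU sed, whose regexes understand \r:
sed -i 's/,*\r*$//' file_name.csv
# removes trailing commas and an optional carriage return in a single substitution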