Bash Script Using API for Online Image Vectorizing - bash

I am a user of vectorizer.io's services, but I now need to batch convert, and that requires a script that uses their API. Their customer service sent me a script that users put together with ChatGPT, but it isn't working for me (API credentials were removed for posting). I receive this error:
curl: (26) Failed to open/read local data from file/application
Script:
#!/bin/bash
# Set the api credits code: found in menu / "email address button" / "Api Settings" (enable 'pay for additional credits' if you want to use more than the included 100 api calls)
API_CREDITS_CODE=""
# Set the directory to search for GIF files
DIRECTORY="/Users/User/Desktop/folderofimages"
# Find all GIF files in the specified directory
for file in $(find "$DIRECTORY" -name "*.gif"); do
# Extract the filename without the file extension
filename="${file%.*}"
# Call the curl command, replacing the X-CREDITS-CODE header with the api credits key and the input and output filenames
curl --http1.1 -H "Expect:" --header "X-CREDITS-CODE: $API_CREDITS_CODE" "https://api.vectorizer.io/v4.0/vectorize" -F "image=@$file" -F "format=svg" -F "colors=0" -F "model=auto" -F "algorithm=auto" -F "details=auto" -F "antialiasing=off" -F "minarea=5" -F "colormergefactor=5" -F "unit=auto" -F "width=0" -F "height=0" -F "roundness=default" -F "palette=" -vvv -o "${filename}.svg"
done
It would also be ideal if the script were to find and use images of all formats, not just GIF. That is not a necessity though.
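For reference, here is a rough, untested sketch of what a multi-format version of the same loop could look like. It keeps the endpoint and form fields from the script above; the credits code, directory, and extension list are placeholders to adjust, and find/read are wired up so that paths containing spaces survive intact:
#!/bin/bash
# Sketch only: reuses the endpoint and form fields from the script above.
API_CREDITS_CODE=""
DIRECTORY="/Users/User/Desktop/folderofimages"
# -iname matches extensions case-insensitively; -print0 with read -d '' keeps
# paths containing spaces or newlines intact, which a $(find ...) loop would split apart.
find "$DIRECTORY" -type f \( -iname "*.gif" -o -iname "*.png" -o -iname "*.jpg" -o -iname "*.jpeg" -o -iname "*.bmp" \) -print0 |
while IFS= read -r -d '' file; do
    filename="${file%.*}"
    curl --http1.1 -H "Expect:" --header "X-CREDITS-CODE: $API_CREDITS_CODE" \
        "https://api.vectorizer.io/v4.0/vectorize" \
        -F "image=@$file" -F "format=svg" -F "colors=0" -F "model=auto" \
        -F "algorithm=auto" -F "details=auto" -F "antialiasing=off" -F "minarea=5" \
        -F "colormergefactor=5" -F "unit=auto" -F "width=0" -F "height=0" \
        -F "roundness=default" -F "palette=" -o "${filename}.svg"
done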

Related

Sending file using CURL in windows

I'm trying to send a file using curl in Windows.
Here's the command I'm using:
C:\curl>curl -X POST -F chat_id=@telegramchannel -F photo=@IMAGE.png https://api.telegram.org/bot812312342:XXXXXXXXXXXXXXXXXXXXXX/sendPhoto
and I keep getting this error:
curl: (26) Failed to open/read local data from file/application
Does anybody know how to solve it, and how to use -F properly with files on Windows?
Thanks
If telegramchannel is not a file, then you have to escape the @ with a backslash or use single quotes to encapsulate the content, because @ has special meaning in curl's -F context. Use either
curl -X POST -F chat_id='@telegramchannel' -F photo=@IMAGE.png https://api.telegram.org/bot812312342:XXXXXXXXXXXXXXXXXXXXXX/sendPhoto
or
curl -X POST -F chat_id=\@telegramchannel -F photo=@IMAGE.png https://api.telegram.org/bot812312342:XXXXXXXXXXXXXXXXXXXXXX/sendPhoto

How to expand a filename inside a curl request

I need to do a curl upload with a file where I don't know the exact file name.
"curl
-F \"status=2\"
-F \"notify=1\"
-F \"ipa=@${FILE}\"
-F \"teams=${TEAM_ID}\"
-H \"X-HockeyAppToken: ${APITOKEN}\"
https://rink.hockeyapp.net/api/2/apps/${APPVERSION}/app_versions/upload"
This is done in gitlab-ci and the FILE variable is set to build/com.test.app_v*.ipa. The file I want to upload has a version number set and has the path build/com.test.app_v1.0.0.0.ipa. The problem I have now is that the * does not get expanded inside this curl call. I've tried it with an export before:
- export ABSOLUTE_FILENAME=${FILE}
- "curl
-F \"status=2\"
-F \"notify=1\"
-F \"ipa=@${ABSOLUTE_FILENAME}\"
-F \"teams=${TEAM_ID}\"
-H \"X-HockeyAppToken: ${APITOKEN}\"
https://rink.hockeyapp.net/api/2/apps/${APPVERSION}/app_versions/upload"
Still I'm getting the error curl: (26) couldn't open file "build/com.test.app_v*.ipa". How can I expand the path to an absolute path before my curl upload?
With realpath command:
...
-F \"ipa=@$(realpath $FILE)\"
...
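Note that $FILE is deliberately left unquoted inside the command substitution, so the shell expands the glob before realpath sees it. A small illustration (the pattern is the one from the question; the expanded name is only an example):
FILE="build/com.test.app_v*.ipa"
realpath $FILE        # unquoted: the glob expands first, e.g. to .../build/com.test.app_v1.0.0.0.ipa
realpath "$FILE"      # quoted: no glob expansion, realpath is handed the literal pattern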

Check if a remote file exists in bash

I am downloading files with this script:
parallel --progress -j16 -a ./temp/img-url.txt 'wget -nc -q -P ./images/ {}; wget -nc -q -P ./images/ {.}_{001..005}.jpg'
Would it be possible not to download the files, but just check that they exist on the remote side, and if so create a dummy file instead of downloading?
Something like:
if wget --spider $url 2>/dev/null; then
#touch img.file
fi
should work, but I don't know how to combine this code with GNU Parallel.
Edit:
Based on Ole's answer I wrote this piece of code:
#!/bin/bash
do_url() {
url="$1"
wget -q -nc --method HEAD "$url" && touch ./images/${url##*/}
#get filename from $url
url2=${url##*/}
wget -q -nc --method HEAD ${url%.jpg}_{001..005}.jpg && touch ./images/${url2%.jpg}_{001..005}.jpg
}
export -f do_url
parallel --progress -a urls.txt do_url {}
It works, but it fails for some files. I cannot find any consistency in why it works for some files and fails for others; maybe it has something to do with the last filename. The second wget accesses the correct URL, but the touch command after it simply does not create the desired file. The first wget always (correctly) downloads the main image without the _001.jpg, _002.jpg suffix.
Example urls.txt:
http://host.com/092401.jpg (works correctly, _001.jpg.._005.jpg are downloaded)
http://host.com/HT11019.jpg (not works, only the main image is downloaded)
It is pretty hard to understand what it is you really want to accomplish. Let me try to rephrase your question.
I have urls.txt containing:
http://example.com/dira/foo.jpg
http://example.com/dira/bar.jpg
http://example.com/dirb/foo.jpg
http://example.com/dirb/baz.jpg
http://example.org/dira/foo.jpg
On example.com these URLs exist:
http://example.com/dira/foo.jpg
http://example.com/dira/foo_001.jpg
http://example.com/dira/foo_003.jpg
http://example.com/dira/foo_005.jpg
http://example.com/dira/bar_000.jpg
http://example.com/dira/bar_002.jpg
http://example.com/dira/bar_004.jpg
http://example.com/dira/fubar.jpg
http://example.com/dirb/foo.jpg
http://example.com/dirb/baz.jpg
http://example.com/dirb/baz_001.jpg
http://example.com/dirb/baz_005.jpg
On example.org these URLs exist:
http://example.org/dira/foo_001.jpg
Given urls.txt I want to generate the combinations with _001.jpg .. _005.jpg in addition to the original URL. E.g.:
http://example.com/dira/foo.jpg
becomes:
http://example.com/dira/foo.jpg
http://example.com/dira/foo_001.jpg
http://example.com/dira/foo_002.jpg
http://example.com/dira/foo_003.jpg
http://example.com/dira/foo_004.jpg
http://example.com/dira/foo_005.jpg
Then I want to test if these URLs exist without downloading the file. As there are many URLs I want to do this in parallel.
If the URL exists I want an empty file created.
(Version 1): I want the empty file created in a similar directory structure in the dir images. This is needed because some of the images have the same name, but in different dirs.
So the files created should be:
images/http:/example.com/dira/foo.jpg
images/http:/example.com/dira/foo_001.jpg
images/http:/example.com/dira/foo_003.jpg
images/http:/example.com/dira/foo_005.jpg
images/http:/example.com/dira/bar_000.jpg
images/http:/example.com/dira/bar_002.jpg
images/http:/example.com/dira/bar_004.jpg
images/http:/example.com/dirb/foo.jpg
images/http:/example.com/dirb/baz.jpg
images/http:/example.com/dirb/baz_001.jpg
images/http:/example.com/dirb/baz_005.jpg
images/http:/example.org/dira/foo_001.jpg
(Version 2): I want the empty file created in the dir images. This can be done because all the images have unique names.
So the files created should be:
images/foo.jpg
images/foo_001.jpg
images/foo_003.jpg
images/foo_005.jpg
images/bar_000.jpg
images/bar_002.jpg
images/bar_004.jpg
images/baz.jpg
images/baz_001.jpg
images/baz_005.jpg
(Version 3): I want the empty file created in the dir images, named after the entry in urls.txt. This can be done because only one of _001.jpg .. _005.jpg exists.
images/foo.jpg
images/bar.jpg
images/baz.jpg
#!/bin/bash
do_url() {
url="$1"
# Version 1:
# If you want to keep the folder structure from the server (similar to wget -m):
wget -q --method HEAD "$url" && mkdir -p images/"$2" && touch images/"$url"
# Version 2:
# If all the images have unique names and you want all images in a single dir
wget -q --method HEAD "$url" && touch images/"$3"
# Version 3:
# If all the images have unique names when _###.jpg is removed and you want all images in a single dir
wget -q --method HEAD "$url" && touch images/"$4"
}
export -f do_url
parallel do_url {1.}{2} {1//} {1/.}{2} {1/} :::: urls.txt ::: .jpg _{001..005}.jpg
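For reference, this is how the replacement strings in that parallel command expand for one input line, following GNU Parallel's {.}, {//}, {/.} and {/} semantics (the URL is taken from the example urls.txt above):
# Input: arg 1 = http://example.com/dira/foo.jpg, arg 2 = _001.jpg
#   {1.}{2}  -> http://example.com/dira/foo_001.jpg   (extension stripped, suffix appended)  -> $1 in do_url
#   {1//}    -> http://example.com/dira               (dirname of arg 1)                     -> $2
#   {1/.}{2} -> foo_001.jpg                           (basename without extension + suffix)  -> $3
#   {1/}     -> foo.jpg                               (basename of arg 1)                    -> $4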
GNU Parallel takes a few ms per job. When your jobs are this short, the overhead will affect the timing. If none of your CPU cores are running at 100% you can run more jobs in parallel:
parallel -j0 do_url {1.}{2} {1//} {1/.}{2} {1/} :::: urls.txt ::: .jpg _{001..005}.jpg
You can also "unroll" the loop. This will save 5 overheads per URL:
do_url() {
url="$1"
# Version 2:
# If all the images have unique names and you want all images in a single dir
wget -q --method HEAD "$url".jpg && touch images/"$url".jpg
wget -q --method HEAD "$url"_001.jpg && touch images/"$url"_001.jpg
wget -q --method HEAD "$url"_002.jpg && touch images/"$url"_002.jpg
wget -q --method HEAD "$url"_003.jpg && touch images/"$url"_003.jpg
wget -q --method HEAD "$url"_004.jpg && touch images/"$url"_004.jpg
wget -q --method HEAD "$url"_005.jpg && touch images/"$url"_005.jpg
}
export -f do_url
parallel -j0 do_url {.} :::: urls.txt
Finally you can run more than 250 jobs: https://www.gnu.org/software/parallel/man.html#EXAMPLE:-Running-more-than-250-jobs-workaround
You may use curl instead to check whether the URLs you are processing exist, without downloading any files:
if curl --head --fail --silent "$url" >/dev/null; then
touch .images/"${url##*/}"
fi
Explanation:
--fail will make the exit status nonzero on a failed request.
--head will avoid downloading the file contents.
--silent will keep the check itself from emitting progress or error output.
To solve the "looping" issue, you can do:
urls=( "${url%.jpg}"_{001..005}.jpg )
for url in "${urls[@]}"; do
if curl --head --silent --fail "$url" > /dev/null; then
touch .images/${url##*/}
fi
done
From what I can see, your question isn't really about how to use wget to test for the existence of a file, but rather on how to perform correct looping in a shell script.
Here is a simple solution for that:
urls=( "${url%.jpg}"_{001..005}.jpg )
for url in "${urls[@]}"; do
if wget -q --method=HEAD "$url"; then
touch .images/${url##*/}
fi
done
This invokes Wget with the --method=HEAD option. With a HEAD request, the server simply reports back whether the file exists or not, without returning any data.
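To see what the server reports for such a request, you can also inspect the status line yourself, for example with curl's -I (--head) option; the URL here is one of the example URLs from above:
curl -sI http://example.com/dira/foo.jpg | head -n 1
# prints just the status line, e.g. "HTTP/1.1 200 OK" if the file exists or "HTTP/1.1 404 Not Found" if it does not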
Of course, with a large data set this one-request-per-URL approach is pretty inefficient. You're creating a new connection to the server for every file you check. Instead, as suggested in the other answer, you could use GNU Wget2. With wget2, you can test all of these in parallel, and use the new --stats-site option to get a list of all the files and the specific return code that the server provided. For example:
$ wget2 --spider --progress=none -q --stats-site example.com/{,1,2,3}
Site Statistics:
http://example.com:
Status No. of docs
404 3
http://example.com/3 0 bytes (identity) : 0 bytes (decompressed), 238ms (transfer) : 238ms (response)
http://example.com/1 0 bytes (gzip) : 0 bytes (decompressed), 241ms (transfer) : 241ms (response)
http://example.com/2 0 bytes (identity) : 0 bytes (decompressed), 238ms (transfer) : 238ms (response)
200 1
http://example.com/ 0 bytes (identity) : 0 bytes (decompressed), 231ms (transfer) : 231ms (response)
You can even get this data printed as CSV or JSON for easier parsing.
Just loop over the names?
for uname in ${url%.jpg}_{001..005}.jpg
do
if wget --spider $uname 2>/dev/null; then
touch ./images/${uname##*/}
fi
done
You could send a command via ssh to see if the remote file exists and cat it if it does:
ssh your_host 'test -e "somefile" && cat "somefile"' > somefile
You could also try scp, which supports glob expressions and recursion.
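For example (your_host and the paths here are placeholders; the remote glob is quoted so that it is expanded by the remote shell rather than locally):
scp 'your_host:/remote/images/*.jpg' ./images/     # copy every matching remote file
scp -r your_host:/remote/images/ ./images/         # or copy the whole directory recursively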

curl: (26) couldn't open file when the file is a variable

I am trying to upload a list of files to a server. This is the script that I have
files=$(shopt -s nullglob dotglob; echo /media/USB/*) > /dev/null 2>&1
if (( ${#files} ))
then
for file in $files
do
echo "Filename"
echo $file
curl -i -X POST -F files=@$file 192.168.1.122:5000/upload
done
fi
Basically I am trying to take all of the files on a USB drive and upload them to my local server. The curl command is giving me trouble. I can move these files to drives that I mount on this system, but I haven't been able to send them with the curl command. I have tried variations on @"$file" and @\"$file\" based on other related questions, but I haven't been able to get this to work. However, what is annoying is that when I do this:
curl -i -X POST -F files=@/absolute/path/to/my/file.txt 192.168.1.122:5000/upload
It works as I expect. How can I get this to work in my loop?
So I ended up figuring out a solution that I will share in case anyone else is having this problem. I am not sure exactly why this fixed it, but I simply had to put quotes around files=@$file in the curl command:
curl -i -X POST -F "files=@$file" 192.168.1.122:5000/upload
Leaving this here in case it is useful to someone down the line.
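The reason the quotes matter is word splitting: with an unquoted $file, a path that contains spaces is split into several arguments, so curl only sees the first fragment after the @ and fails with error 26. A small illustration (the path is hypothetical; the endpoint is the one from the question):
file="/media/USB/my file.txt"
curl -i -X POST -F files=@$file 192.168.1.122:5000/upload     # unquoted: splits, curl tries to open "/media/USB/my" and fails
curl -i -X POST -F "files=@$file" 192.168.1.122:5000/upload   # quoted: one argument, the upload works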

How to download a file using curl

I'm on mac OS X and can't figure out how to download a file from a URL via the command line. It's from a static page, so I thought copying the download link and then using curl would do the trick, but it isn't working.
I referenced this StackOverflow question but that didn't work. I also referenced this article which also didn't work.
What I've tried:
curl -o https://github.com/jdfwarrior/Workflows.git
curl: no URL specified!
curl: try 'curl --help' or 'curl --manual' for more information
wget -r -np -l 1 -A zip https://github.com/jdfwarrior/Workflows.git
zsh: command not found: wget
How can a file be downloaded through the command line?
The -o (--output) option means curl writes its output to the file you specify instead of stdout. Your mistake was putting the URL directly after -o, so curl treated the URL as the file to write to and concluded that no URL was specified. You need a file name after -o, then the URL:
curl -o ./filename https://github.com/jdfwarrior/Workflows.git
And wget is not available by default on OS X.
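If you do want wget on macOS, it can typically be installed with Homebrew (assuming Homebrew itself is already set up):
brew install wget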
curl -OL https://github.com/jdfwarrior/Workflows.git
-O: Write the output to a local file named like the remote file we get. In this case that file would be Workflows.git.
-L: If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response code), this option makes curl redo the request at the new location.
Ref: curl man page
The easiest solution for your question is to keep the original filename. In that case, you just need to use a capital O ("-O") as the option (not a zero!). So it looks like:
curl -O https://github.com/jdfwarrior/Workflows.git
There are several options to make curl output to a file
# saves it to myfile.txt
curl http://www.example.com/data.txt -o myfile.txt -L
# The #1 gets substituted with whatever the first glob in the URL matched, so the filename reflects the URL
curl "http://www.example.com/data_{1,2}.txt" -o "file_#1.txt" -L
# saves to data.txt, the filename extracted from the URL
curl http://www.example.com/data.txt -O -L
# saves to filename determined by the Content-Disposition header sent by the server.
curl http://www.example.com/data.txt -O -J -L
# -O Write output to a local file named like the remote file we get
# -o <file> Write output to <file> instead of stdout (variable replacement performed on <file>)
# -J Use the Content-Disposition filename instead of extracting filename from URL
# -L Follow redirects
