I need to make the following curl POST request from Go:
curl -X POST http://localhost:8001/routes/route_name -F "name=function" -F "config.file[1]=#my_file.lua"
I have been looking at multipart file upload examples, but I can't wrap my head around how to create a POST with a nested field name (config.file is a config object with file as an array of strings) using CreateFormFile(key, val).
The problem I am having is that, according to the docs (https://pkg.go.dev/mime/multipart#Writer.WriteField), WriteField takes a field name and a value (the path of the file to post). I am not sure how to write a field name that is in a nested object, e.g. CreateFormFile("config.file", filePath).
Does anyone know how to convert the above curl command into a Go POST request?
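Here is a minimal, hedged sketch of one way to do it, assuming #my_file.lua is meant to upload the contents of a local file named my_file.lua. The multipart field name is just an opaque string, so the nested name config.file[1] can be passed to CreateFormFile verbatim; it is the server that interprets the dots and brackets:

package main

import (
	"bytes"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
	"os"
)

func main() {
	body := &bytes.Buffer{}
	w := multipart.NewWriter(body)

	// Plain text field, equivalent to -F "name=function".
	if err := w.WriteField("name", "function"); err != nil {
		panic(err)
	}

	// The "nested" field name is passed verbatim as a string.
	part, err := w.CreateFormFile("config.file[1]", "my_file.lua")
	if err != nil {
		panic(err)
	}
	f, err := os.Open("my_file.lua") // assumed local file
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if _, err := io.Copy(part, f); err != nil {
		panic(err)
	}

	// Close the writer so the terminating boundary is written.
	if err := w.Close(); err != nil {
		panic(err)
	}

	req, err := http.NewRequest(http.MethodPost, "http://localhost:8001/routes/route_name", body)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", w.FormDataContentType())

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}

If the # in the curl command is meant literally (i.e. the field value should be the string #my_file.lua rather than the file contents), then w.WriteField("config.file[1]", "#my_file.lua") does that instead.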
I want to download a private repository from GitLab using a curl command (unfortunately I am obliged to do so). I know that the following works:
curl 'https://gitlab.com/api/v4/projects/2222/repository/archive?private_token=hlpat-21321' > your_project.tar.gz
where 2222 is the project ID and hlpat-21321 is the access token (both values are random placeholders).
I want to do the same thing but without exposing the access token directly. One idea would be to use stdin, i.e. take the token as input from the user on the command line. How can I do that?
Quoting the curl man page for --header:
Starting in 7.55.0, this option can take an argument in @filename style, which then adds a header for each line in the input file. Using @- will make curl read the header file from stdin.
So pass the token as a header instead, and feed that header from stdin:
curl --header @- ...url... <<< "PRIVATE-TOKEN: private_token"
This way you don't need to pass the token in the URL.
I'm using etcd v3.3 in my application and communicate with it over its HTTP API.
According to the documentation, I don't need to explicitly create directories when putting key-value pairs at some path.
Here is an example of what I am doing (note that the path /base-test-path/level1/level2/level3/ does not exist yet):
curl -X PUT -d value=foo http://localhost:2379/v2/keys/base-test-path/level1/level2/level3/
The result was:
{"action":"set","node":{"key":"/base-test-path/test/test/test","value":"foo","modifiedIndex":347017,"createdIndex":347017}}
But when I try to add a new value a bit deeper in the existing path, I get an error (note that /base-test-path/level1/level2/level3/ already exists because I ran the previous command first):
curl -X PUT -d value=foo http://localhost:2379/v2/keys/base-test-path/level1/level2/level3/level4
Response:
{"errorCode":104,"message":"Not a directory","cause":"/base-test-path/level1/level2/level3","index":347018}
It seems like etcd does not create directories when part of the path already exists as a key.
The question is: can I keep my code simple, so that I don't need to care about etcd directories and can still put values at any etcd path I want?
It seems that etcd, when spoken to over API v2, cannot create a directory where a key already exists in its place.
E.g. when you have a key
/base-test-path/level1/somekey
you cannot create this key ("Not a directory"):
/base-test-path/level1/somekey/subkey
When I switched to API v3 this problem went away, because v3 does not have directories at all; the keyspace is flat. See the official etcd v3 documentation.
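To illustrate the flat v3 keyspace, here is a rough Go sketch using the official clientv3 package; the import path below is for recent client releases (older ones live under github.com/coreos/etcd/clientv3), the endpoint is the one from the question, and the keys are the example keys above:

package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// In v3 the keyspace is flat, so one key can happily be a "prefix" of
	// another; neither Put can fail with "Not a directory".
	if _, err := cli.Put(ctx, "/base-test-path/level1/somekey", "foo"); err != nil {
		panic(err)
	}
	if _, err := cli.Put(ctx, "/base-test-path/level1/somekey/subkey", "bar"); err != nil {
		panic(err)
	}

	// Directory-style listing is emulated with a prefix query.
	resp, err := cli.Get(ctx, "/base-test-path/", clientv3.WithPrefix())
	if err != nil {
		panic(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s = %s\n", kv.Key, kv.Value)
	}
}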
I am attempting to convert HTML documents to PDF format using a bash script. I've found that the Sejda converter does a good job of fully rendering the charts I need, but am having some trouble using it in the console rather than the web interface. Although the documentation at https://www.sejda.com/developers gives an example of how to convert a URL, does anyone know of a similar way to convert a local file in the console?
The HTML to PDF conversion is not available via the sejda-console.
However, you can convert a local file through the sejda.com API, not only URLs, by posting the file's HTML contents.
Here's an example converting HTML code from the command line:
curl https://api.sejda.com/v1/tasks \
  --fail --silent --show-error \
  --header "Content-Type: application/json" \
  --data '{"htmlCode": "<strong>HTML<\/strong> code here",
           "type": "htmlToPdf" }' > converted.pdf
Disclaimer: I'm one of the developers.
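For completeness, here is a rough Go sketch of the same call that reads a local HTML file and posts its contents; the file name input.html is hypothetical, the endpoint and payload mirror the curl example above, and any API key your account requires would have to be added as an extra header:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Read the local HTML file whose contents we want to convert.
	htmlBytes, err := os.ReadFile("input.html") // hypothetical file name
	if err != nil {
		panic(err)
	}

	// Build the same JSON payload the curl example sends.
	payload, err := json.Marshal(map[string]string{
		"htmlCode": string(htmlBytes),
		"type":     "htmlToPdf",
	})
	if err != nil {
		panic(err)
	}

	resp, err := http.Post("https://api.sejda.com/v1/tasks", "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		msg, _ := io.ReadAll(resp.Body)
		panic(fmt.Sprintf("conversion failed: %s: %s", resp.Status, msg))
	}

	// Save the returned PDF, like "> converted.pdf" in the curl example.
	out, err := os.Create("converted.pdf")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if _, err := io.Copy(out, resp.Body); err != nil {
		panic(err)
	}
}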
I am trying to download an artifact from Jenkins where I need the latest build. If I curl jenkins.mycompany.com/view/iOS/job/build_ios/lastSuccessfulBuild/artifact/build it brings me to the page that contains the artifact I need to download, which in my case is myCompany-1234.ipa
So by changing curl to wget with wget --auth-no-challenge https://userIsMe:123myapikey321@jenkins.mycompany.com/view/iOS/job/build_ios/lastSuccessfulBuild/artifact/build/ it downloads the index.html file.
If I add --reject index.html, it stops index.html from being downloaded.
If I add the name of the artifact with a wildcard, like MyCompany-*.ipa, it downloads a 14k file named MyCompany-*.ipa, not the MyCompany-1234.ipa I was hoping for. Keep in mind the page I am requesting only has one MyCompany-1234.ipa, so there will never be multiple matches.
If I use the pattern-matching flag -A "*.ipa", like so: wget --auth-no-challenge -A "*.ipa" https://userIsMe:123myapikey321@jenkins.mycompany.com/view/iOS/job/build_ios/lastSuccessfulBuild/artifact/build/ it still doesn't download the artifact.
It works if I type the exact URL, like so: wget --auth-no-challenge https://userIsMe:123myapikey321@jenkins.mycompany.com/view/iOS/job/build_ios/lastSuccessfulBuild/artifact/build/MyCompany-1234.ipa
The problem here is that the .ipa is not always going to be 1234; tomorrow it will be 1235, and so on. How can I either parse the HTML or use the wildcard correctly in wget to ensure I am always getting the latest one?
Never mind: working with another engineer here at my work, we came up with a super elegant solution parsing JSON.
Install Chrome and get the plugin JSONView
Call the Jenkins API in your Chrome browser using https://$domain/$job/lastSuccessfulBuild/api/json
This will print out the key/value pairs as JSON. Note your key; for me it was number.
brew install jq
In a bash script, create a variable that will store the dynamic value as follows.
This will store the build number in latest:
latest=$(curl --silent --show-error https://userIsMe:123myapikey321@jenkins.mycompany.com/job/build_ios/lastSuccessfulBuild/api/json | jq '.number')
Print it to screen if you would like:
echo $latest
Now with some string interpolation pass the variable for latest to your wget call:
wget --auth-no-challenge https://userIsMe:123myapikey321@jenkins.mycompany.com/view/iOS/job/build_ios/lastSuccessfulBuild/artifact/build/myCompany-$latest.ipa
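For reference, the same two steps (read .number from the JSON API, then download the artifact) look roughly like this in Go; the host, credentials and myCompany-<number>.ipa naming below are the placeholders from the question and would need to match your own Jenkins setup:

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

// Placeholder values copied from the question; replace with your own.
const (
	jenkins = "https://jenkins.mycompany.com"
	user    = "userIsMe"
	apiKey  = "123myapikey321"
)

// get performs an authenticated GET against Jenkins.
func get(url string) (*http.Response, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.SetBasicAuth(user, apiKey)
	return http.DefaultClient.Do(req)
}

func main() {
	// Step 1: ask the JSON API for the last successful build number.
	resp, err := get(jenkins + "/job/build_ios/lastSuccessfulBuild/api/json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var build struct {
		Number int `json:"number"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&build); err != nil {
		panic(err)
	}

	// Step 2: interpolate the number into the artifact URL and download it.
	name := fmt.Sprintf("myCompany-%d.ipa", build.Number)
	art, err := get(jenkins + "/view/iOS/job/build_ios/lastSuccessfulBuild/artifact/build/" + name)
	if err != nil {
		panic(err)
	}
	defer art.Body.Close()

	out, err := os.Create(name)
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if _, err := io.Copy(out, art.Body); err != nil {
		panic(err)
	}
}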
Hope this helps someone out as there is limited info out there that is clear and concise, especially given that wget has been around for an eternity.
Fairly new to the world of UNIX and trying to get my head round its quirks.
I am trying to make a fairly simple shell script that uses wget to send an XML file that has been pre-processed with sed.
I thought about using a pipe, but it caused some weird behaviour where it just output my XML to the console.
This is what I have so far:
File_Name=$1
echo "File name being sent to KCIM is : " $1
wget "http://testserver.com" --post-file `sed s/XXXX/$File_Name/ < template.xml` |
--header="Content-Type:text/xml"
From the output I can see I am not doing this right, as it's creating a badly formatted HTTP request:
POST data file `<?xml' missing: No such file or directory
Resolving <... failed: node name or service name not known.
wget: unable to resolve host address `<'
Bonus points for explaining what the problem is as well as the solution.
For wget, the option --post-file sends the contents of the named file. In your case you are passing the data itself, so you probably want --post-data.
The way you have it right now, the backticks make bash substitute the output of sed into the command line, so wget gets something like:
wget ... --post-file <?xml stuff stuff stuff
So wget goes looking for a file called <?xml instead of using that text as the request body. Either capture the sed output with command substitution and pass it via --post-data, or redirect it to a temporary file and give that file's name to --post-file. (Also, the stray | at the end of the wget line should be a backslash line continuation, otherwise the --header=... line is run as a separate command.)