I have a working curl command that I'd like to split out to make it easier to read.
curl d "valuea=1234&valueb=4567&valuec=87979&submit=Set" -XPOST "http://$ipa/school.cgi" > /dev/null
I've tried several ways to do this, but none seems to work.
curl -d "valuea=1234"\
-d "valueb=4567"\
-d "valuec=87979"\
-d "submit=Set"\
-XPOST "http://$ipa/school.cgi"
curl -d "valuea=1234\
valueb=4567\
valuec=87979\
submit=Set"\
-XPOST "http://$ipa/school.cgi"
Can someone advise how to do it?
Thanks
The first approach is right. I've run into problems splitting commands across multiple lines in some environments that trim whitespace, so it's a good idea to add a space before each backslash:
curl -d "valuea=1234" \
-d "valueb=4567" \
-d "valuec=87979" \
-d "submit=Set" \
-XPOST "http://$ipa/school.cgi"
You can also try it against a simple service that shows you what it receives, like this one:
// echorequest.js
const http = require('http');

const hostname = '0.0.0.0';
const port = 3001;

const server = http.createServer((req, res) => {
  console.log(`\n${req.method} ${req.url}`);
  console.log(req.headers);
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  let data = '';
  req.on('data', function(chunk) {
    data += chunk;
  });
  req.on('end', function() {
    console.log('BODY: ' + data);
    res.end(data + "\n");
  });
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://localhost:${port}/`);
});
Run it with node echorequest.js and point your command at it instead (-XPOST "http://localhost:3001").
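For example (assuming Node.js is installed and the file above is saved as echorequest.js), you can start the echo server and replay the split command against it:

node echorequest.js &
curl -d "valuea=1234" \
     -d "valueb=4567" \
     -d "valuec=87979" \
     -d "submit=Set" \
     -XPOST "http://localhost:3001/school.cgi"

The server's console output shows the exact body it received, which makes a badly split command easy to spot.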
The second approach can also work if you add the missing '&' separators.
For example
POST:
curl http://url -d \
"valuea=1234&\
valueb=4567&\
valuec=87979&\
submit=Set"
GET:
curl "http://url?valuea=1234&\
valueb=4567&\
valuec=87979&\
submit=Set"
If you need a service on the fly where to test your requests, try https://requestbin.fullcontact.com/
I have a Jenkins Pipeline with three stages: "Build", "Test" and "Deploy".
Here is the problem I have with "Build":
The build step ensures that the structure of the Control-M Automation API JSON files is valid.
To do this, I utilize the $endpoint/build service provided by the Automation API in the build step:
stage('Build') {
    environment {
        CONTROLM_CREDS = credentials('xxx')
        ENDPOINT = 'xxx'
    }
    steps {
        sh '''
            username=$CONTROLM_CREDS_USR
            password=$CONTROLM_CREDS_PSW
            # Login
            login=$(curl -k -s -H "Content-Type: application/json" -X POST -d \\{\\"username\\":\\"$username\\",\\"password\\":\\"$password\\"\\} "$ENDPOINT/session/login" )
            token=$(echo ${login##*token\\" : \\"} | cut -d '"' -f 1)
            # Build
            curl -k -s -H "Authorization: Bearer $token" -X POST -F "definitionsFile=@ctmjobs/TestCICD.json" "$ENDPOINT/build"
            curl -k -s -H "Authorization: Bearer $token" -X POST "$ENDPOINT/session/logout"
        '''
    }
}
<snip>
Everything works as expected, but if I intentionally put an error in the JSON file, Jenkins detects it and prints the error in the terminal, yet "Build" still goes green. Can anyone identify the error? My expectation is that the "Build" stage goes red as soon as there is an error in the JSON file.
Here is a Jenkins output from the terminal:
+ password=****
++ curl -k -s -H 'Content-Type: application/json' -X POST -d '{"username":"xxx","password":"****"}' /automation-api/session/login
+ login='{
"username" : "xxx",
"token" : "xxx",
"version" : "9.19.200"
}'
++ echo 'xxx",
' '"version"' : '"9.19.200"
' '}'
++ cut -d '"' -f 1
+ token=xxx
+ curl -k -s -H 'Authorization: Bearer xxx' -X POST -F definitionsFile=@ctmjobs/Test.json /automation-api/build
{
"errors" : [ {
"message" : "unknown type: Job:Dummmy",
"file" : "Test.json",
"line" : 40,
"col" : 29
}, {
"message" : "unknown type: Job:Dummmy",
"file" : "Test.json",
"line" : 63,
"col" : 29
} ]
}+ curl -k -s -H 'Authorization: Bearer xxx' -X POST /automation-api/session/logout
{
"message" : "Successfully logged out from session xxx"
}
For Jenkins to consider a stage failed, it checks the exit code of each command it executes; in your case:
curl -k -s -H 'Authorization: Bearer xxx' -X POST -F definitionsFile=@ctmjobs/Test.json /automation-api/build
The issue is that the curl command itself exits successfully, even though the response body shows that the API call failed.
You can add the --fail flag to your curl. This forces curl to return a non-zero exit code (22) when the HTTP response status is 400 or greater. From the curl manual:
(HTTP) Fail silently (no output at all) on server errors. This is
mostly done to enable scripts etc to better deal with failed attempts.
In normal cases when an HTTP server fails to deliver a document, it
returns an HTML document stating so (which often also describes why
and more). This flag will prevent curl from outputting that and return
error 22.
curl --show-error --fail -k -H 'Authorization: Bearer xxx' -X POST -F definitionsFile=@ctmjobs/Test.json /automation-api/build
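If it turns out that the build endpoint responds with HTTP 200 even when the JSON contains validation errors, --fail alone won't flip the exit code. A hedged alternative is to inspect the response body yourself inside the sh step; this sketch assumes jq is available on the Jenkins agent:

response=$(curl -k -s -H "Authorization: Bearer $token" -X POST -F "definitionsFile=@ctmjobs/TestCICD.json" "$ENDPOINT/build")
echo "$response"
# Fail the sh step (and therefore the stage) if the response contains a non-empty "errors" array
if echo "$response" | jq -e '(.errors // []) | length > 0' > /dev/null; then
    exit 1
fi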
I'm trying to insert a new record into a Parse Platform table with a given objectId (a compliant one).
However, when I do the POST call:
curl -X POST \
-H "X-Parse-Application-Id: ${APPLICATION_ID}" \
-H "X-Parse-Master-Key: ${MASTER_KEY}" \
-H "Content-Type: application/json" \
-d '{"objectId": "xdH402yd9z", "field": "testData"}' $URL
the post fails with: {"code":105,"error":"objectId is an invalid field name."}
How can I insert the record with an existing objectId?
Note: the data I'm inserting is basically the same data I got out of the Parse Server earlier, but with a few minor changes.
Thank you.
Using a custom objectId is disabled by default. You will need to enable allowCustomObjectId on the server. Depending on how you start your server, you can try something like the below in your app.js:
const api = new ParseServer({
  databaseURI: databaseUri || 'mongodb://localhost:27017/dev',
  cloud: process.env.PARSE_SERVER_CLOUD || __dirname + '/cloud/main.js',
  appId: process.env.PARSE_SERVER_APPLICATION_ID || 'myAppId',
  masterKey: process.env.PARSE_SERVER_MASTER_KEY || '',
  //readOnlyMasterKey: process.env.PARSE_SERVER_READ_ONLY_MASTER_KEY,
  encryptionKey: process.env.PARSE_SERVER_ENCRYPTION_KEY,
  objectIdSize: parseInt(process.env.PARSE_SERVER_OBJECT_ID_SIZE) || 10,
  serverURL: process.env.PARSE_SERVER_URL || 'http://localhost:' + process.env.PORT + '/parse',
  publicServerURL: process.env.PARSE_PUBLIC_SERVER_URL || 'http://localhost:' + process.env.PORT + '/parse',
  allowCustomObjectId: true, // Here's what you need to enable
});
You can see a complete example here: https://github.com/netreconlab/parse-hipaa/blob/parse-swift/parse/index.js
You can also set the environment variable:
PARSE_SERVER_ALLOW_CUSTOM_OBJECT_ID=1
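For example, if the server is started from index.js, the variable just needs to be present in its environment (adjust to however you actually launch the server):

export PARSE_SERVER_ALLOW_CUSTOM_OBJECT_ID=1
node index.js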
I am hitting a wall trying to build a script to save myself quite a good bit of time. I am working in a system in which I need to run a curl POST against a list of values. The list is about 400 lines long, so I am hoping to find a way of scripting this in Bash instead of running that call manually for each entry. Below are some details to help understand what I'm trying to accomplish:
If I were to be doing this task manually, each call would be formatted like the below:
curl -X POST --header "Content-Type: application/json" -v 'http://www.website.com:8081/cc/membership' -d @json_payload.json
This points to my JSON in the referenced file, which looks like the below:
{
"groupId": "12345678987654321",
"type": "serial",
"memberInfo": "apple"
}
If I run the above, the call works and the expected operation occurs. The issue is that I need to run this against roughly 400 values for that "memberInfo" field in the JSON payload. I'm trying to identify a way to run a single bash script that runs this curl command over and over, updating the JSON payload to use each row of a file like the below:
memberList.txt
apple
banana
peach
pear
orange
.
.
And then maybe insert a pointer in my JSON for the "memberInfo" field over to this file.
Any and all help/suggestions are greatly appreciated!
This will do what you intend. It's a little convoluted, but you can polish it a bit.
#!/bin/bash

function getString(){
    echo "$1" | python3 -c '
import json
import sys

payload = """
{
    "groupId": "12345678987654321",
    "type": "serial",
    "memberInfo": ""
}
"""

obj = json.loads(payload)
obj["memberInfo"] = sys.stdin.read().strip()
print(json.dumps(obj, indent = " "))
'
}

while read member
do
    getString "$member" > json_payload.json
    curl -X POST --header "Content-Type: application/json" -v 'http://www.website.com:8081/cc/membership' -d @json_payload.json
done <<< "$( cat fruits.txt )"
Hope it helps!
while read member; do
    curl -X POST --header "Content-Type: application/json" -v 'http://www.website.com:8081/cc/membership' -d "{\"groupId\": \"12345678987654321\",\"type\": \"serial\",\"memberInfo\": \"$member\"}"
done <members.txt
This will work if you only care about the memberInfo field. Another method is to write your JSON line by line to a payloads.txt file.
payloads.txt
{"groupId": "12345678987455432","type": "stereo","memberInfo": "apple"}
{"groupId": "34532453453453465","type": "serial","memberInfo": "banana"}
...
then use this as the script
while read payload; do
    curl -X POST --header "Content-Type: application/json" -v 'http://www.website.com:8081/cc/membership' -d "$payload"
done <payloads.txt
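If any member values could contain quotes or other characters that break hand-built JSON, a safer variation (assuming jq is installed) builds each payload programmatically and pipes it straight to curl:

while read -r member; do
    # jq escapes the value properly and emits a complete JSON object
    jq -n --arg m "$member" '{groupId: "12345678987654321", type: "serial", memberInfo: $m}' |
        curl -X POST --header "Content-Type: application/json" -v 'http://www.website.com:8081/cc/membership' -d @-
done < memberList.txt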
Here is a collection of common bash scripting patterns I've had to use:
https://github.com/felts94/advanced-bash/blob/master/bash_learn.sh
I want to send an email with an attached PDF file through the SparkPost API with a curl POST.
To insert the PDF I use the following (my test.pdf is ~200 KB):
"data":"'$(cat test.pdf | base64 --wrap=0)'"
But somehow this doesn't work; it fails with the following error:
/usr/bin/curl: Die Argumentliste ist zu lang
(English: Argument list is too long)
EDIT:
curl command
curl -X POST https://api.eu.sparkpost.com/api/v1/transmissions -H 'Authorization: <APIKEY>' -H 'Content-Type: application/json' -d '{
  "options":{
    "open_tracking":false,
    "click_tracking":false,
    "inline_css":false
  },
  "recipients":[
    {
      "address":{
        "email":"user@domain.tld",
        "name":"user"
      }
    }
  ],
  "content":{
    "from":{
      "name":"sender",
      "email":"sender@domain.tld"
    },
    "reply_to":"replyto@domain.tld",
    "subject":"subject",
    "text":"textbody",
    "attachments":[
      {
        "name":"attachmentname.pdf",
        "type":"application/pdf",
        "data":"'$(cat test.pdf | base64 --wrap=0)'"
      }
    ]
  }
}'
This is coming up because you are trying to pass the entirety of the base64'd content on the command line. curl has the ability to load in data to POST from a file, which I'd recommend doing. More information can be found in the man page, but the basic format is this:
curl -X POST -d @filename.txt https://website.com/path
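Applied to your transmission (abridged here; the filename payload.json is just an example), the body can be assembled into a file first so the base64 blob never appears on curl's command line:

cat > payload.json <<EOF
{
  "recipients": [ { "address": { "email": "user@domain.tld", "name": "user" } } ],
  "content": {
    "from": { "name": "sender", "email": "sender@domain.tld" },
    "subject": "subject",
    "text": "textbody",
    "attachments": [ {
      "name": "attachmentname.pdf",
      "type": "application/pdf",
      "data": "$(base64 --wrap=0 test.pdf)"
    } ]
  }
}
EOF
curl -X POST https://api.eu.sparkpost.com/api/v1/transmissions \
     -H 'Authorization: <APIKEY>' \
     -H 'Content-Type: application/json' \
     -d @payload.json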
According to the curl manual, the -F option can base64-encode a file for you, wrapping the encoded output at 76 characters per line.
Ex:
-F '=@localfile;encoder=base64'
I'm currently trying to make a DDNS script that interacts with the Cloudflare API to catch changes in my IP address and automatically fix the IP address change for my web server. Everything is working correctly so far, except that I can't get $IP to be put properly into the curl statement. I first run a Python script from within the bash script to get the IP address, then run the curl statement in the bash script. Here's what the Python script looks like (it returns an IP address like "1.1.1.1", quotations included, because the curl command requires the quotations):
#!/usr/bin/python3
import subprocess as sp

def main():
    command = "dig +short myip.opendns.com @resolver1.opendns.com"
    ip = sp.check_output(command, shell=True).decode('utf-8').strip('\n')
    ip_tmp = ip
    ip_tmp = '"' + ip + '"'
    ip = ip_tmp
    print(ip)

if __name__ == "__main__":
    main()
And the bash script looks like this:
#!/bin/bash
IP=$("./getIP.py")
curl -X PUT "https://api.cloudflare.com/client/v4/zones/zone_id/dns_records/dns_id" \
-H "X-Auth-Email: example.com" \
-H "X-Auth-Key: authkey" \
-H "Content-Type: application/json" \
--data '{"type":"A","name":"example.com","content":$IP,"ttl":120,"proxied":true}'
I've tried to have the python script only return numbers and then added the quotations in the bash script and now vice versa and I can't seem to get it to work. The last line should end up looking like this once the variable replaces with quotations around the ip address:
'{"type":"A","name":"example.com","content":"127.0.0.1","ttl":120,"proxied":true}'
The single quotes around your json structure prevent the variable from expanding.
You have a few options that are readily available.
1. Ugly quote escaping inside/around your JSON:
"{\"type\":\"A\",\"name\":\"example.com\",\"content\":$IP,\"ttl\":120,\"proxied\":true}"
2. Have the Python script write this data to a file and tell curl to use that file as the source of the POST data:
curl -X PUT "https://api.cloudflare.com/client/v4/zones/zone_id/dns_records/dns_id" \
-H "X-Auth-Email: example.com" \
-H "X-Auth-Key: authkey" \
-H "Content-Type: application/json" \
--data @file_you_wrote_your_json_to.json
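For example, the same idea entirely in bash (reusing the existing getIP.py; the filename payload.json is just an illustration, and here bash rather than Python writes the file):

IP=$(./getIP.py)   # prints the address already wrapped in double quotes, e.g. "1.1.1.1"
printf '{"type":"A","name":"example.com","content":%s,"ttl":120,"proxied":true}\n' "$IP" > payload.json
curl -X PUT "https://api.cloudflare.com/client/v4/zones/zone_id/dns_records/dns_id" \
     -H "X-Auth-Email: example.com" \
     -H "X-Auth-Key: authkey" \
     -H "Content-Type: application/json" \
     --data @payload.json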
3. Use the Python requests or urllib modules to issue the request to Cloudflare.
Update your main() function to return the IP instead of printing it:
import requests

my_ip = main()

url = "https://api.cloudflare.com/client/v4/zones/zone_id/dns_records/dns_id"
myheaders = {
    "X-Auth-Email": "example.com",
    "X-Auth-Key": "authkey",
    "Content-Type": "application/json"
}
myjson = {
    "type": "A",
    "name": "example.com",
    "content": my_ip,
    "ttl": 120,
    "proxied": True
}
# json= serializes the dict as a JSON body (data= would form-encode it)
requests.put(url, headers=myheaders, json=myjson)
Better yet, just do it in bash; see Cloudflare DDNS on GitHub.
A one-shot command to fetch the dynamic A-record ID:
curl -X GET "https://api.cloudflare.com/client/v4/zones/**Zone ID** \
/dns_records?type=A&name=dynamic" \
-H "Host: api.cloudflare.com" \
-H "User-Agent: ddclient/3.9.0" \
-H "Connection: close" \
-H "X-Auth-Email: example#example.com" \
-H "X-Auth-Key: "**Authorization key**" \
-H "Content-Type: application/json"
Cron job (* * * * *) to set the dynamic A-record:
#!/usr/bin/env sh
AUTH_EMAIL=example@example.com
AUTH_KEY=** CF Authorization key **
ZONE_ID=** CF Zone ID **
A_RECORD_NAME="dynamic"
A_RECORD_ID=** CF A-record ID from cloudflare-dns-id.sh **
IP_RECORD="/tmp/ip-record"
RECORDED_IP=`cat $IP_RECORD`
PUBLIC_IP=$(curl --silent https://api.ipify.org) || exit 1
if [ "$PUBLIC_IP" = "$RECORDED_IP" ]; then
exit 0
fi
echo $PUBLIC_IP > $IP_RECORD
RECORD=$(cat <<EOF
{ "type": "A",
"name": "$A_RECORD_NAME",
"content": "$PUBLIC_IP",
"ttl": 180,
"proxied": false }
EOF
)
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID \
/dns_records/$A_RECORD_ID" \
-X PUT \
-H "Content-Type: application/json" \
-H "X-Auth-Email: $AUTH_EMAIL" \
-H "X-Auth-Key: $AUTH_KEY" \
-d "$RECORD"