I'm trying to insert a new record into a Parse Platform table with the objectId included (a compliant one).
However, when I do the POST call:
curl -X POST \
-H "X-Parse-Application-Id: ${APPLICATION_ID}" \
-H "X-Parse-Master-Key: ${MASTER_KEY}" \
-H "Content-Type: application/json" \
-d '{"objectId": "xdH402yd9z", "field": "testData"}' $URL
the post fails with: {"code":105,"error":"objectId is an invalid field name."}
How can I insert the record with existing objectId?
Note: the data I'm inserting is basically the same data I previously got out of Parse Server, with a few minor changes.
Thank you.
Using a custom objectId is disabled by default. You will need to enable allowCustomObjectId on the server. Depending on how you start your server, you can try something like the below in your app.js:
const api = new ParseServer({
  databaseURI: databaseUri || 'mongodb://localhost:27017/dev',
  cloud: process.env.PARSE_SERVER_CLOUD || __dirname + '/cloud/main.js',
  appId: process.env.PARSE_SERVER_APPLICATION_ID || 'myAppId',
  masterKey: process.env.PARSE_SERVER_MASTER_KEY || '',
  //readOnlyMasterKey: process.env.PARSE_SERVER_READ_ONLY_MASTER_KEY,
  encryptionKey: process.env.PARSE_SERVER_ENCRYPTION_KEY,
  objectIdSize: parseInt(process.env.PARSE_SERVER_OBJECT_ID_SIZE) || 10,
  serverURL: process.env.PARSE_SERVER_URL || 'http://localhost:' + process.env.PORT + '/parse',
  publicServerURL: process.env.PARSE_PUBLIC_SERVER_URL || 'http://localhost:' + process.env.PORT + '/parse',
  allowCustomObjectId: true, // Here's what you need to enable
  // ...any other options you already pass...
});
You can see a complete example here: https://github.com/netreconlab/parse-hipaa/blob/parse-swift/parse/index.js
You can also set the environment variable:
PARSE_SERVER_ALLOW_CUSTOM_OBJECT_ID=1
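Once that's enabled, the POST from the question should go through unchanged. As a rough sketch (reusing the ${APPLICATION_ID}, ${MASTER_KEY} and $URL variables from the question):
curl -X POST \
-H "X-Parse-Application-Id: ${APPLICATION_ID}" \
-H "X-Parse-Master-Key: ${MASTER_KEY}" \
-H "Content-Type: application/json" \
-d '{"objectId": "xdH402yd9z", "field": "testData"}' $URL
# should now return 201 Created instead of {"code":105,"error":"objectId is an invalid field name."}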
I'm writing a curl command.
HTTP method: POST
API: api/databricks/query
Request body: {"query":" GRANT SELECT,READ_METADATA,USAGE on DATABASE `DB` to `userID` " }
I have executed the query using Postman and it works:
curl --location --request POST 'https://<databick_workspace-url>/api/sql/databricks/query' \
--header 'Authorization: Bearer <token>' \
--header 'Content-Type: application/json' \
--data-raw '{"query":"GRANT SELECT,READ_METADATA,USAGE on DATABASE `DB-NAME` to `userID` " }'
# Working, and the user can access the DB
I believe the problem is with the backticks.
# Function
function grantReadAccess() {
  local path="/api/sql/databricks/query"
  local url="https://<databricks-workspace-url>/${path}"
  printf 'curl %q\n' "${url}"
  local DATABASENAME="DB-NAME"
  local userID="userID"
  local content="GRANT SELECT,READ_METADATA,USAGE on DATABASE \`${DATABASENAME}\` to \`${userID}\` "
  echo "------------$content"
  # Output here: GRANT SELECT,READ_METADATA,USAGE on DATABASE `DB-NAME` to `userID`
  # The same string has to go into the request body
  local permissionToGroup=$(
    curl -X POST "${url}" -H "Authorization: Bearer ${authtoken}" -H "Content-Type: application/json" -d '{ "query": \"'"${content}"'\" } ')
  # Required result: {"query":"GRANT SELECT,READ_METADATA,USAGE on DATABASE `DB` to `userID` " }
  echo "${permissionToGroup}"
}
Tried in Postman instead of using backticks:
1. Single quotes "'": {"query":"GRANT SELECT,READ_METADATA,USAGE on DATABASE 'DB' to 'userID' "}
2. Brackets "(": {"query":"GRANT SELECT,READ_METADATA,USAGE on DATABASE (DB) to (userID) "}
Error
{
"query": "grant SELECT, READ_METADATA, USAGE on DATABASE 'DB' to 'useID' ",
"data": null,
"error": " org.apache.spark.sql.catalyst.parser.ParseException: \nOperation not allowed: grant(line 1, pos 0)\n\n== SQL ==\ngrant SELECT, READ_METADATA, USAGE on DATABASE 'DB"
}
The above function is not working.
Always use a tool that understands JSON when you need to build JSON. In this context, the widely accepted tool for the job is jq.
curl -X POST "${url}" \
-H "Authorization: Bearer ${authtoken}" \
-H "Content-Type: application/json" \
-d "$(jq -n --arg content "$content" '{"query": $content}')"
Your question is not very clear, but I'll guess you're having problems in this part:
-d '{ "query": \"'"${content}"'\" } '
That will get you literal backslashes before the double quotes around your content, since backslashes inside single quotes are inserted literally. You could swap that for -d '{ "query": "'"${content}"'" }' or -d "{ \"query\": \"${content}\" }"
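For completeness, a rough sketch of the curl line inside grantReadAccess with the second variant applied (this assumes ${content} itself never contains double quotes or backslashes, otherwise the JSON breaks and you are back to needing jq):
local permissionToGroup=$(
  curl -X POST "${url}" \
    -H "Authorization: Bearer ${authtoken}" \
    -H "Content-Type: application/json" \
    -d "{ \"query\": \"${content}\" }"
)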
I'm currently trying to make a DDNS script that interacts with the Cloudflare API to detect changes in my IP address and automatically update the DNS record for my web server. Everything is working correctly so far, except I can't get $IP to be substituted properly in the curl statement. I first run a Python script from within the bash script to get the IP address, then run the curl statement in the bash script. Here's what the Python script looks like (it returns an IP address like "1.1.1.1" with the quotation marks included, because the curl command requires them):
#!/usr/bin/python3
import subprocess as sp

def main():
    command = "dig +short myip.opendns.com @resolver1.opendns.com"
    ip = sp.check_output(command, shell=True).decode('utf-8').strip('\n')
    # wrap the address in literal double quotes, e.g. "1.1.1.1"
    ip = '"' + ip + '"'
    print(ip)

if __name__ == "__main__":
    main()
And the bash script looks like this:
#!/bin/bash
IP=$("./getIP.py")
curl -X PUT "https://api.cloudflare.com/client/v4/zones/zone_id/dns_records/dns_id" \
-H "X-Auth-Email: example.com" \
-H "X-Auth-Key: authkey" \
-H "Content-Type: application/json" \
--data '{"type":"A","name":"example.com","content":$IP,"ttl":120,"proxied":true}'
I've tried to have the python script only return numbers and then added the quotations in the bash script and now vice versa and I can't seem to get it to work. The last line should end up looking like this once the variable replaces with quotations around the ip address:
'{"type":"A","name":"example.com","content":"127.0.0.1","ttl":120,"proxied":true}'
The single quotes around your json structure prevent the variable from expanding.
You have a few options that are readily available.
Ugly quote escaping inside/around your json.
"{\"type\":\"A\",\"name\":\"example.com\",\"content\":$IP,\"ttl\":120,\"proxied\":true}"
Having the python write this data to a file and telling curl to use that file for the source of the post data.
curl -X PUT "https://api.cloudflare.com/client/v4/zones/zone_id/dns_records/dns_id" \
-H "X-Auth-Email: example.com" \
-H "X-Auth-Key: authkey" \
-H "Content-Type: application/json" \
--data @file_you_wrote_your_json_to.json
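If you prefer to build that file from bash rather than from Python, a rough sketch of the same idea (the temp file path is just an example, and $IP is assumed to still carry the surrounding quotes printed by getIP.py):
IP=$(./getIP.py)
cat > /tmp/cf_record.json <<EOF
{"type":"A","name":"example.com","content":$IP,"ttl":120,"proxied":true}
EOF
curl -X PUT "https://api.cloudflare.com/client/v4/zones/zone_id/dns_records/dns_id" \
-H "X-Auth-Email: example.com" \
-H "X-Auth-Key: authkey" \
-H "Content-Type: application/json" \
--data @/tmp/cf_record.json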
Using the Python requests or urllib modules to issue the request to Cloudflare.
Update your main() function to return the IP instead of printing it (return the plain address, without the extra quotes, since requests does the JSON quoting for you):
import requests

my_ip = main()  # main() now returns the IP instead of printing it
url = "https://api.cloudflare.com/client/v4/zones/zone_id/dns_records/dns_id"
myheaders = {
    "X-Auth-Email": "example.com",
    "X-Auth-Key": "authkey",
    "Content-Type": "application/json"
}
myjson = {
    "type": "A",
    "name": "example.com",
    "content": my_ip,
    "ttl": 120,
    "proxied": True
}
# json= serializes the dict and sends it as the request body
requests.put(url, headers=myheaders, json=myjson)
Better yet, just do it in bash; see Cloudflare DDNS on GitHub.
One shot to fetch the dynamic A-record ID:
curl -X GET "https://api.cloudflare.com/client/v4/zones/**Zone ID** \
/dns_records?type=A&name=dynamic" \
-H "Host: api.cloudflare.com" \
-H "User-Agent: ddclient/3.9.0" \
-H "Connection: close" \
-H "X-Auth-Email: example#example.com" \
-H "X-Auth-Key: "**Authorization key**" \
-H "Content-Type: application/json"
Cron job (* * * * *) to set the dynamic A-record:
#!/usr/bin/env sh
AUTH_EMAIL=example@example.com
AUTH_KEY=** CF Authorization key **
ZONE_ID=** CF Zone ID **
A_RECORD_NAME="dynamic"
A_RECORD_ID=** CF A-record ID from cloudflare-dns-id.sh **
IP_RECORD="/tmp/ip-record"
RECORDED_IP=$(cat "$IP_RECORD" 2>/dev/null)  # empty on the first run
PUBLIC_IP=$(curl --silent https://api.ipify.org) || exit 1
if [ "$PUBLIC_IP" = "$RECORDED_IP" ]; then
exit 0
fi
echo $PUBLIC_IP > $IP_RECORD
RECORD=$(cat <<EOF
{ "type": "A",
"name": "$A_RECORD_NAME",
"content": "$PUBLIC_IP",
"ttl": 180,
"proxied": false }
EOF
)
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID \
/dns_records/$A_RECORD_ID" \
-X PUT \
-H "Content-Type: application/json" \
-H "X-Auth-Email: $AUTH_EMAIL" \
-H "X-Auth-Key: $AUTH_KEY" \
-d "$RECORD"
I have a working curl command that I'd like to split out to make it easier to read.
curl d "valuea=1234&valueb=4567&valuec=87979&submit=Set" -XPOST "http://$ipa/school.cgi" > /dev/null
I've tried several ways to do this, but none seems to work.
curl -d "valuea=1234"\
-d "valueb=4567"\
-d "valuec=87979"\
-d "submit=Set"\
-XPOST "http://$ipa/school.cgi"
curl -d "valuea=1234\
valueb=4567\
valuec=87979\
submit=Set"\
-XPOST "http://$ipa/school.cgi"
Can someone advise how to do it?
Thanks
The first approach is right. I have experienced problems with spreading commands over multiple lines in some environments that trim whitespace, so it's a good idea to add a space before the backslashes:
curl -d "valuea=1234" \
-d "valueb=4567" \
-d "valuec=87979" \
-d "submit=Set" \
-XPOST "http://$ipa/school.cgi"
If needed, try it against a simple service that shows you what it is receiving, like this one:
// echorequest.js
const http = require('http');
const hostname = '0.0.0.0';
const port = 3001;
const server = http.createServer((req, res) => {
console.log(`\n${req.method} ${req.url}`);
console.log(req.headers);
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
let data = '';
req.on('data', function(chunk) {
data += chunk
});
req.on('end', function() {
console.log('BODY: ' + data);
res.end(data + "\n");
});
});
server.listen(port, hostname, () => {
console.log(`Server running at http://localhost:${port}/`);
});
... run it with node echorequest.js (and change the target of the command: -XPOST "http://localhost:3001").
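For instance, a quick check of the corrected command against the echo server, with the path taken from the question; the node process prints the method, URL, headers and body of each request:
curl -d "valuea=1234" \
     -d "valueb=4567" \
     -d "valuec=87979" \
     -d "submit=Set" \
     -XPOST "http://localhost:3001/school.cgi"
# the node console then shows, alongside the headers:
# POST /school.cgi
# BODY: valuea=1234&valueb=4567&valuec=87979&submit=Set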
The second approach could also work by adding the missing '&' separators.
For example
POST:
curl http://url -d \
"valuea=1234&\
valueb=4567&\
valuec=87979&\
submit=Set"
GET:
curl "http://url?valuea=1234&\
valueb=4567&\
valuec=87979&\
submit=Set"
If you need a service on the fly where to test your requests, try https://requestbin.fullcontact.com/
In Parse.com, is it possible to receive the entire row in the response after making a Create Object call, instead of receiving the objectId alone?
For example, the command below
curl -X POST \
-H "X-Parse-Application-Id: qEXLVybHgoqX79zKIpjA2wIGL5suvbVyZDA9Lt4A" \
-H "X-Parse-REST-API-Key: RSJfkl80UCLC24TYqaUjKqJmtoFtRojNRXTVPxMj" \
-H "Content-Type: application/json" \
-d '{"score":1337,"playerName":"Sean Plott","cheatMode":false}' \
https://api.parse.com/1/classes/GameScore
returns output response as
{
"createdAt": "2011-08-20T02:06:57.931Z",
"objectId": "Ed1nuqPvcm"
}
Instead of this, is it possible to retrieve the entire object in the response?
No, unfortunately the API does not support that (which I have missed on several occasions). You will need to re-fetch the object after it is saved.
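For completeness, a rough sketch of that follow-up fetch, reusing the keys from the question and the objectId returned by the create call:
curl -X GET \
-H "X-Parse-Application-Id: qEXLVybHgoqX79zKIpjA2wIGL5suvbVyZDA9Lt4A" \
-H "X-Parse-REST-API-Key: RSJfkl80UCLC24TYqaUjKqJmtoFtRojNRXTVPxMj" \
https://api.parse.com/1/classes/GameScore/Ed1nuqPvcm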
I'm trying to do the following POST to Parse Cloud using the Curb gem
curl -X POST \
-H "X-Parse-Application-Id: PARSE_APP_ID" \
-H "X-Parse-REST-API-Key: PARSE_API_KEY" \
-H "Content-Type: image/jpeg" \
--data-binary '@myPicture.jpg' \
https://api.parse.com/1/files/pic.jpg
with this:
curl = Curl::Easy.new("https://api.parse.com/1/files/lion.jpg")
curl.multipart_form_post = true
curl.headers["X-Parse-Application-Id"] = PARSE_APP_ID
curl.headers["X-Parse-REST-API-Key"] = PARSE_API_KEY
curl.headers["Content-Type"] = "image/jpg"
res = curl.http_post(Curl::PostField.file('file', image.path))
Upload goes through with a 201, but it doesn't seem like the file makes it up to the server correctly.
Figured it out:
curl = Curl::Easy.new("https://api.parse.com/1/files/lion.jpg")
curl.headers["X-Parse-Application-Id"] = PARSE_APP_ID
curl.headers["X-Parse-REST-API-Key"] = PARSE_API_KEY
curl.headers["Content-Type"] = "image/jpeg"
data = File.read('/Users/haider/Pictures/lion.jpg')
curl.post_body=data
curl.http_post
puts curl.body_str
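The key difference from the first attempt is that this sends the raw image bytes as the request body, mirroring curl's --data-binary, instead of wrapping the file in a multipart/form-data envelope as multipart_form_post does.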