Testing REST routes with cURL - data not saving - mean-stack

I am running through the tutorials in Open REST Routes but I am now stuck at posting data to the MongoDB posts collection.
When I cURL, the request executes without error, but my title and link are not saved, and upvotes defaults to 0 as defined in the schema.
My Post schema is:
var mongoose = require('mongoose');

var PostSchema = new mongoose.Schema({
  title: String,
  link: String,
  upvotes: {type: Number, default: 0},
  comments: [{type: mongoose.Schema.Types.ObjectId, ref: 'Comment'}]
});

mongoose.model('Post', PostSchema);
I get the result below:
curl http://localhost:3000/posts -d '{"title":"Go Bigdadi! Nicely done!!!","link":"http://www.foo.com","upvotes":2}'
{"upvotes":0,"comments":[],"_id":"5b740c875bdf6a326c677cd3","__v":0}`
Please suggest where I am likely going wrong.

I think I have figured it out....
I tried changing the parameters around a bit and was able to get the expected result.
I tried:
curl http://localhost:3000/posts -i -X POST -d "title=Bigdadi Rules Posts&link=http://www.foo.com&upvotes=2"
It works!!!
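For what it's worth, the original JSON payload fails because curl -d sends the body as application/x-www-form-urlencoded by default, so the JSON string is never parsed into fields. A minimal sketch of the JSON variant, assuming the Express app has a JSON body parser wired up:
curl http://localhost:3000/posts -i -X POST \
  -H "Content-Type: application/json" \
  -d '{"title":"Go Bigdadi! Nicely done!!!","link":"http://www.foo.com","upvotes":2}'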

Google Cloud DLP - CSV inspection

I'm trying to inspect a CSV file and there are no findings being returned (I'm using the EMAIL_ADDRESS info type and the addresses I'm using are coming up with positive hits here: https://cloud.google.com/dlp/demo/#!/). I'm sending the CSV file into inspect_content with a byte_item as follows:
byte_item: {
  type: :CSV,
  data: File.open('/xxxxx/dlptest.csv', 'r').read
}
In looking at the supported file types, it looks like CSV/TSV files are inspected via Structured Parsing.
For CSV/TSV, does that mean one can't just send in the file, and needs to use the table attribute instead of byte_item, as per https://cloud.google.com/dlp/docs/inspecting-structured-text?
What about XLSX files, for example? They're an unspecified file type, so I tried a configuration like the following, but it still returned no findings:
byte_item: {
  type: :BYTES_TYPE_UNSPECIFIED,
  data: File.open('/xxxxx/dlptest.xlsx', 'rb').read
}
I'm able to do inspection and redaction with images and text fine, but having a bit of a problem with other file types. Any ideas/suggestions welcome! Thanks!
Edit: The contents of the CSV in question:
$ cat ~/Downloads/dlptest.csv
dylans@gmail.com,anotehu,steve@example.com
blah blah,anoteuh,
aonteuh,
$ file ~/Downloads/dlptest.csv
~/Downloads/dlptest.csv: ASCII text, with CRLF line terminators
The full request:
parent = "projects/xxxxxxxx/global"
inspect_config = {
  info_types: [{name: "EMAIL_ADDRESS"}],
  min_likelihood: :POSSIBLE,
  limits: { max_findings_per_request: 0 },
  include_quote: true
}
request = {
  parent: parent,
  inspect_config: inspect_config,
  item: {
    byte_item: {
      type: :CSV,
      data: File.open('/xxxxx/dlptest.csv', 'r').read
    }
  }
}
dlp = Google::Cloud::Dlp.dlp_service
response = dlp.inspect_content(request)
The CSV file I was testing with was something I created using Google Sheets and exported as a CSV; however, the file showed locally as "text/plain; charset=us-ascii". I downloaded a CSV off the internet and it had a MIME type of "text/csv; charset=utf-8". This is the one that worked. So it looks like my issue was specifically due to the file having an incorrect MIME type.
xlsx is not yet supported. Coming soon. (Maybe that part of the question should be split out from the CSV debugging issue.)
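On the structured-parsing question above: per the inspecting-structured-text docs, CSV data can also be sent as a table item instead of a byte_item. A minimal sketch of the equivalent REST call with curl (PROJECT_ID and the header/row values here are placeholders):
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/PROJECT_ID/content:inspect" \
  -d '{
    "inspectConfig": {
      "infoTypes": [{"name": "EMAIL_ADDRESS"}],
      "minLikelihood": "POSSIBLE",
      "includeQuote": true
    },
    "item": {
      "table": {
        "headers": [{"name": "email"}, {"name": "notes"}],
        "rows": [
          {"values": [{"stringValue": "dylans@gmail.com"}, {"stringValue": "anotehu"}]}
        ]
      }
    }
  }'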

Curl/GraphQL command failing with 200

I am trying to write a shell script that executes a curl against a GraphQL API, and I've never interacted with GQL before. I am getting some strange errors, and although I understand this community doesn't have access to the GQL server, I was hoping someone could take a look at the script and make sure I'm not doing anything flagrantly wrong syntax-wise (both in the shell script layer and in the GQL query itself).
My script:
#!/bin/bash
BSEE_WEB_SERVER_DNS=https://mybsee.example.com
BSEE_API_KEY=abc123
siteId=1
scanConfigId=456
runScanQuery='mutation CreateScheduleItem { create_schedule_item(input: {site_id: "$siteId" scan_configuration_ids: "$scanConfigId"}) { schedule_item { id } } }'
runScanVariables='{ "input": "site_id": $scanId }}'
runScanOperationName='CreateScheduleItem'
curl -i --request POST \
  --url $BSEE_WEB_SERVER_DNS/graphql/v1 \
  --header "Authorization: $BSEE_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '{"query":"$runScanQuery","variables":{$runScanVariables},"operationName":"${runScanOperationName}"}'
And the output when I run the script off the terminal:
HTTP/2 200
<OMITTED RESPONSE HEADERS>
{"errors":[{"message":"Invalid JSON : Unexpected character (\u0027$\u0027 (code 36)): was expecting double-quote to start field name, Line 1 Col 38","extensions":{"code":3}}]}%
I am omitting the HTTP response headers for security and brevity reasons.
I am wondering if my use of quotes/double-quotes is somehow wrong, or if there is anything about the nature of the GQL query itself (via curl) that looks off to anyone.
I verified with the team that manages the server that the HTTP 200 OK response code is expected: 200 shows that the request reached the GQL API, but GQL responds with this error to indicate that the query itself is incorrect.
We need to modify the GraphQL bits and fix the bash string quoting.
runScanQuery GraphQL operation
Fix the GraphQL syntax. Use an operation named CreateScheduleItem that declares the variables $siteId and $scanConfigId and passes them in the arguments input: { site_id: $siteId, scan_configuration_ids: $scanConfigId }:
mutation CreateScheduleItem($siteId: String!, $scanConfigId: String!) {
  create_schedule_item(
    input: { site_id: $siteId, scan_configuration_ids: $scanConfigId }
  ) {
    schedule_item {
      id
    }
  }
}
runScanVariables: JSON
Our mutation declares two variables, which GraphQL substitutes into CreateScheduleItem($siteId: String!, $scanConfigId: String!). Provide the GraphQL variables as JSON; note that the JSON keys do not carry the $ prefix. Here is the expected output after bash variable substitution:
{ "siteId": "1", "scanConfigId": "456" }
Get the bash quoting right
Finally, translate the inputs into bash-friendly syntax:
runScanQuery='mutation CreateScheduleItem($siteId: String!, $scanConfigId: String!) { create_schedule_item(input: {site_id: $siteId, scan_configuration_ids: $scanConfigId}) { schedule_item { id } } }'
runScanVariables='{"siteId":"'"$siteId"'","scanConfigId":"'"$scanConfigId"'"}' # no spaces!
runScanOperationName='CreateScheduleItem'
data='{"query":"'"$runScanQuery"'","variables":'"$runScanVariables"',"operationName":"'"$runScanOperationName"'"}'
Check the bash formatting: paste the terminal output into a code-aware editor like VSCode and confirm it parses as expected.
echo $runScanQuery # want string in graphql format
echo $runScanVariables # want JSON
echo $data # want JSON
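With siteId=1 and scanConfigId=456 from the script, echo $data should print a single line of JSON like this:
{"query":"mutation CreateScheduleItem($siteId: String!, $scanConfigId: String!) { create_schedule_item(input: {site_id: $siteId, scan_configuration_ids: $scanConfigId}) { schedule_item { id } } }","variables":{"siteId":"1","scanConfigId":"456"},"operationName":"CreateScheduleItem"}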
Edit: add a public API example
Here's a complete working example using the public Star Wars API:
#!/bin/bash
filmId=1
data='{"query":"query Query($filmId: ID) { film(filmID: $filmId) { title }}","variables":{"filmId":"'"$filmId"'"}}'
curl --location --request POST 'https://swapi-graphql.netlify.app/.netlify/functions/index' \
--header 'Content-Type: application/json' \
--data "$data"
Responds with {"data":{"film":{"title":"A New Hope"}}}.
In GraphQL it's normal to always get a 200 status code; the client must check the response body for failures.
The reason is simple: in REST, HTTP is part of the protocol and the status code carries semantics, but in GraphQL HTTP is not part of the protocol; you can have GraphQL over several transport protocols:
HTTP: the typical scenario (docs)
WebSocket: does not provide any status-code-like payload (sample)
MQTT: does not provide any status-code-like payload
...
The only way the server tells you anything (including failures) is the body.
In your case I suggest using jq in your bash script to parse the JSON and check for an errors property.
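A minimal sketch, assuming the response body is captured in a shell variable:
response=$(curl -s --request POST \
  --url "$BSEE_WEB_SERVER_DNS/graphql/v1" \
  --header "Authorization: $BSEE_API_KEY" \
  --header 'Content-Type: application/json' \
  --data "$data")
# jq -e sets a non-zero exit status when .errors is null or absent
if echo "$response" | jq -e '.errors' > /dev/null; then
  echo "GraphQL reported errors:" >&2
  echo "$response" | jq -r '.errors[].message' >&2
  exit 1
fi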
Your error is completely unrelated to GraphQL. You really do have invalid JSON.
The error message says Unexpected character (\u0027$\u0027 (code 36)): was expecting double-quote to start field name, Line 1 Col 38.
Replace the escaped \u0027 with apostrophes and you get:
Unexpected character ('$' (code 36)): was expecting double-quote to start field name, Line 1 Col 38
So the parser chokes on a dollar sign around position 38 of what you send as data to curl:
--data '{"query":"$runScanQuery","variables":{$runScanVariables},"operationName":"${runScanOperationName}"}'
                                              ^
                                              this
First, all field names and string values in JSON must be wrapped in double quotes, not single quotes.
Second, if you want the shell to expand a variable before curl sees it, put the string in double quotes, not single quotes.
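A quick way to see the difference:
echo '$runScanVariables'   # single quotes: prints the literal text $runScanVariables
echo "$runScanVariables"   # double quotes: prints the variable's value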

How to delete all attributes from the schema in solr?

Deleting all documents from Solr is:
curl http://localhost:8983/solr/trans/update?commit=true -d "<delete><query>*:*</query></delete>"
Adding a (static) attribute to the schema is:
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field":{"name":"trans","type":"string","stored":true,"indexed":true}}' http://localhost:8983/solr/trans/schema
Deleting one attribute is:
curl -X POST -H 'Content-type:application/json' -d '{ "delete-field":{"name":"trans"}}' http://arteika:8983/solr/trans/schema
Is there a way to delete all attributes from the schema?
At least in version 6.6 of the Schema API, and up to the current version 7.5, you can pass multiple commands in a single POST (see the 6.6 and 7.5 documentation, respectively). There are multiple accepted formats, but the most intuitive one (I think) is just passing an array for the action you want to perform:
curl -X POST -H 'Content-type: application/json' -d '{
  "delete-field": [
    {"name": "trans"},
    {"name": "other_field"}
  ]
}' 'http://arteika:8983/solr/trans/schema'
So. How do we obtain the names of the fields we want to delete? That can be done by querying the Schema:
curl -X GET -H 'Content-type: application/json' 'http://arteika:8983/solr/trans/schema'
In particular, the copyFields, dynamicFields and fields keys in the schema object in the response.
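For example, to pull just the static field names out of that response (assuming jq is available):
curl -s 'http://arteika:8983/solr/trans/schema' | jq -r '.schema.fields[].name'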
I automated clearing all copy field rules, dynamic field rules and fields as follows. You can of course use any kind of script that is available to you. I used Python 3 (might work with Python 2, I did not test that).
import json
import requests

# load schema information
api = 'http://arteika:8983/solr/trans/schema'
r = requests.get(api)

# delete copy field rules
names = [(o['source'], o['dest']) for o in r.json()['schema']['copyFields']]
payload = {'delete-copy-field': [{'source': name[0], 'dest': name[1]} for name in names]}
requests.post(api, data=json.dumps(payload),
              headers={'Content-type': 'application/json'})

# delete dynamic fields
names = [o['name'] for o in r.json()['schema']['dynamicFields']]
payload = {'delete-dynamic-field': [{'name': name} for name in names]}
requests.post(api, data=json.dumps(payload),
              headers={'Content-type': 'application/json'})

# delete fields
names = [o['name'] for o in r.json()['schema']['fields']]
payload = {'delete-field': [{'name': name} for name in names]}
requests.post(api, data=json.dumps(payload),
              headers={'Content-type': 'application/json'})
Just a note: I received status 400 responses at first, with null error messages. I had a bit of a hard time figuring out how to fix those, so I'm sharing what worked for me: changing the default of updateRequestProcessorChain in solrconfig.xml to false (default="${update.autoCreateFields:false}") and restarting the Solr service made those errors go away. The fields I was deleting were created automatically, which may have something to do with it.

Specify query parameter for a single HTTP method

To illustrate my problem, I made a condensed example from the Apiary.io blueprint tutorial.
FORMAT: 1A
# Gist Fox API

# Group Gist
Gist-related resources of *Gist Fox API*.

## Gists Collection [/gists{?since}]

### List All Gists [GET]
+ Parameters
    + since (optional, string) ... Timestamp in ISO 8601 format: `YYYY-MM-DDTHH:MM:SSZ` Only gists updated at or after this time are returned.

+ Response 200

        {
            items: []
        }

### Create a Gist [POST]
To create a new Gist simply provide a JSON hash of the *description* and *content* attributes for the new Gist.

+ Request (application/json)

        {
            "description": "Description of Gist",
            "content": "String content"
        }

+ Response 201
Then in my apiary documentation I get the following:
GET /gists{?since}
POST /gists{?since}
However, for me it makes sense to have the since query parameter only for the GET request. Unfortunately I didn't find a way to achieve this result:
GET /gists{?since}
POST /gists
Is something like this possible?
Update
(Thursday, 23 Oct 2014)
The fix has been deployed; could you please give it a try and let me know if everything works as expected?
The bad news
It is our (Apiary) bug and you're not doing anything wrong :-(
The good news
It is a known bug we are currently working on and it is going to be fixed with the end of this week (Sunday, 19 Oct 2014) :-)

Couchdb view Queries

Could you please help me create a view? Below is the requirement:
select * from personaccount where name="srini" and user="pup" order by lastloggedin
I have to send name and user as input to the view and the data should be sorted by lastloggedin.
Below is the view I have created, but it is not working:
{
  "language": "javascript",
  "views": {
    "sortdatetimefunc": {
      "map": "function(doc) { emit({ lastloggedin: doc.lastloggedin, name: doc.name, user: doc.user }, doc); }"
    }
  }
}
And this is the curl command I am using:
http://uta:password@localhost:5984/personaccount/_design/checkdatesorting/_view/sortdatetimefunc?key={\"name:srini\",\"user:pup\"}
My questions are:
Sorting is done on the key, and I want it on lastloggedin, so I have included that in the emit function as well.
But I am passing only name and user as parameters. Do we need to pass all the fields that appear in the key?
First of all, thank you for the reply. I have done the same and I am getting errors. Please help.
Could you please try this on your PC? I am posting all the commands:
curl -X PUT http://uta:password@localhost:5984/person-data
curl -X PUT http://uta:password@localhost:5984/person-data/srini -d '{"Name":"SRINI", "Idnum":"383896", "Format":"NTSC", "Studio":"Disney", "Year":"2009", "Rating":"PG", "lastTimeOfCall": "2012-02-08T19:44:37+0100"}'
curl -X PUT http://uta:password@localhost:5984/person-data/raju -d '{"Name":"RAJU", "Idnum":"456787", "Format":"FAT", "Studio":"VFX", "Year":"2010", "Rating":"PG", "lastTimeOfCall": "2012-02-08T19:50:37+0100"}'
curl -X PUT http://uta:password@localhost:5984/person-data/vihar -d '{"Name":"BALA", "Idnum":"567876", "Format":"FAT32", "Studio":"YELL", "Year":"2011", "Rating":"PG", "lastTimeOfCall": "2012-02-08T19:55:37+0100"}'
Here's the view I created as you suggested:
{
  "_id": "_design/persondestwo",
  "_rev": "1-0d3b4857b8e6c9e47cc9af771c433571",
  "language": "javascript",
  "views": {
    "personviewtwo": {
      "map": "function (doc) {\u000a emit([ doc.Name, doc.Idnum, doc.lastTimeOfCall ], null);\u000a}"
    }
  }
}
I fired this command from curl:
curl -X GET http://uta:password@localhost:5984/person-data/_design/persondestwo/_view/personviewtwo?startkey=["SRINI","383896"]&endkey=["SRINI","383896",{}]descending=true&include_docs=true
I got this error:
[4] 3000
curl: (3) [globbing] error: bad range specification after pos 99
[5] 1776
[6] 2736
[3] Done descending=true
[4] Done(3) curl -X GET http://uta:password#localhost:5984/person-data/_design/persondestwo/_view/personviewtwo?startkey=["SRINI","383896"]
[5] Done endkey=["SRINI","383896"]
I do not know what this error is.
I have also tried passing the parameters the way below, and it does not help either:
curl -X GET http://uta:password@localhost:5984/person-data/_design/persondestwo/_view/personviewtwo?key={\"Name\":\"SRINI\",\"Idnum\": \"383896\"}&descending=true
But I get different errors on escape sequences.
Overall I just want this query to be satisfied through the view:
select * from person-data where Name="SRINI" and Idnum="383896" order by lastTimeOfCall
My concern is how to pass multiple parameters from the curl command, as I get a lot of errors when I do it the above way.
First off, you need to use an array as your key. I would use:
function (doc) {
  emit([ doc.name, doc.user, doc.lastLoggedIn ], null);
}
This basically outputs all the documents in order by name, then user, then lastLoggedIn. You can use the following URL to query.
/_design/checkdatesorting/_view/sortdatetimefunc?startkey=["srini","pup"]&endkey=["srini","pup",{}]&include_docs=true
Second, notice I did not output doc as the value in the map function. Emitting the whole document takes up much more disk space, especially if your documents are fairly large. Just use include_docs=true instead.
Lastly, refer to the CouchDB Wiki, it's pretty helpful.
I just stumbled upon this question. The errors you are getting are caused by not escaping this command:
curl -X GET http://uta:password@localhost:5984/person-data/_design/persondestwo/_view/personviewtwo?startkey=["SRINI","383896"]&endkey=["SRINI","383896",{}]descending=true&include_docs=true
The & character has a special meaning on the command-line and should be escaped when part of an actual parameter.
So you should put quotes around the big URL, and escape the quotes inside it:
curl -X GET "http://uta:password@localhost:5984/person-data/_design/persondestwo/_view/personviewtwo?startkey=[\"SRINI\",\"383896\"]&endkey=[\"SRINI\",\"383896\",{}]&descending=true&include_docs=true"
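An alternative that avoids hand-escaping entirely is to let curl build and URL-encode the query string for you (a sketch against the same view):
curl -G "http://uta:password@localhost:5984/person-data/_design/persondestwo/_view/personviewtwo" \
  --data-urlencode 'startkey=["SRINI","383896"]' \
  --data-urlencode 'endkey=["SRINI","383896",{}]' \
  --data-urlencode 'include_docs=true'
With -G, curl appends the --data-urlencode pairs to the URL as a GET query string, so the quotes, brackets and ampersands never reach the shell unescaped.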
