Using AJV in Postman seems to ignore nesting/children - validation

I'm trying to validate JSON response messages according to schema.
But as soon as an attribute has the type "array" or "object", AJV seems to ignore it and does not see the nested attributes within that object or array.
I started trying to break up the schema into multiple smaller ones so all objects/arrays are covered, but this is a tedious task and I'm hoping I'm just using AJV incorrectly.
Example schema (simplified):
{
  "type": "object",
  "required": [
    "MainThing"
  ],
  "properties": {
    "MainThing": {
      "type": "string",
      "description": "Some kind of string.",
      "example": "This is some kind of string."
    },
    "NestedThing": {
      "type": "object",
      "required": [
        "NestedAttribute"
      ],
      "properties": {
        "NestedAttribute": {
          "type": "String",
          "description": "A nested string.",
          "example": "This is a nested string"
        }
      }
    }
  }
}
If I change NestedAttribute to an Integer instead of a String, the schema validation in Postman still tells me the response is valid against the schema, even though it's not, because the response will contain a NestedAttribute with a String value.
The following response would be deemed valid, while it's not:
{
  "MainThing": "The main thing",
  "NestedThing": {
    "NestedAttribute": 12345
  }
}
My Postman test code:
var Ajv = require('ajv'),
    ajv = new Ajv({logger: console}),
    schema = JSON.parse(pm.variables.get("THIS_IS_MY_AWESOME_SCHEMA"));

pm.test('My schema is valid', function() {
    var data = pm.response.json();
    pm.expect(ajv.validate(schema, data)).to.be.true;
});
If anyone can point me in the right direction, that would be greatly appreciated!
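One way to narrow this down (a sketch, assuming the AJV build bundled with Postman): validate the schema itself against its meta-schema and log AJV's error output on failure. Note that JSON Schema type names are lowercase ("string", "object"), so a capitalized type like the "String" on NestedAttribute is not a valid type.
var Ajv = require('ajv'),
    ajv = new Ajv({allErrors: true, logger: console}),
    schema = JSON.parse(pm.variables.get("THIS_IS_MY_AWESOME_SCHEMA"));

// Catches problems in the schema itself, e.g. unknown type names.
pm.test('Schema is itself valid', function() {
    pm.expect(ajv.validateSchema(schema)).to.be.true;
});

pm.test('Response matches schema', function() {
    var valid = ajv.validate(schema, pm.response.json());
    if (!valid) {
        console.log(ajv.errorsText(ajv.errors));
    }
    pm.expect(valid).to.be.true;
});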

Related

How to Marshal/Unmarshal a common JSON & BSON key/field that can have two different formats in Go?

I currently have mongo data stored in two forms (specifically for the content key) in a collection. Partial sample data is shown below:
Format 1:
{
  "type": "text",
  "content": "foobar",
  "extraTextData": "hello text"
}
Format 2:
{
  "type": "group",
  "content": [
    {
      "type": "text",
      "content": "grouped-foobar"
    },
    {
      "type": "image",
      "url": "https://abc.jpg"
    },
  ],
  "extraGroupData": "hello group"
}
My attempt to structure this in Go is below:
type C struct {
    Type    string `json:"type" bson:"type"`
    Content ???
    *TextC
    *GroupC
}

type TextC struct {
    ExtraTextData `json:"extraTextData" bson:"extraTextData"`
}

type GroupC struct {
    ExtraGroupData `json:"extraGroupData" bson:"extraGroupData"`
}
I am having issues with how to set up the structure for the "content" field so that it works for both formats, TextC and GroupC.
Content for GroupC can be an array of C, like Content []C.
Content for TextC can also be of string type.
Can someone please help and give an example of how to tackle this situation?
The Format 2 JSON is invalid. You can check it here: https://jsonlint.com/
I have created a sample scenario for your case.
You can try it here: https://go.dev/play/p/jaUE3rjI-Ik
Use interface{} like this:
type AutoGenerated struct {
    Type           string      `json:"type"`
    Content        interface{} `json:"content"`
    ExtraTextData  string      `json:"extraTextData,omitempty"`
    ExtraGroupData string      `json:"extraGroupData,omitempty"`
}
And you should also remove the trailing comma from Format 2:
{
  "type": "group",
  "content": [
    {
      "type": "text",
      "content": "grouped-foobar"
    },
    {
      "type": "image",
      "url": "https://abc.jpg"
    }
  ],
  "extraGroupData": "hello group"
}
If you don't remove the comma, it will throw an error like this:
invalid character ']' looking for beginning of value
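A short usage sketch building on the interface{} answer above (the type switch is illustrative, not part of the original answer): a JSON string decodes into a Go string and a JSON array into []interface{}, so you can branch on the concrete type of Content.
package main

import (
    "encoding/json"
    "fmt"
)

type AutoGenerated struct {
    Type           string      `json:"type"`
    Content        interface{} `json:"content"`
    ExtraTextData  string      `json:"extraTextData,omitempty"`
    ExtraGroupData string      `json:"extraGroupData,omitempty"`
}

func main() {
    data := []byte(`{"type": "group", "content": [{"type": "text", "content": "grouped-foobar"}], "extraGroupData": "hello group"}`)

    var c AutoGenerated
    if err := json.Unmarshal(data, &c); err != nil {
        panic(err)
    }

    // Branch on the concrete type that landed in Content.
    switch v := c.Content.(type) {
    case string:
        fmt.Println("text content:", v)
    case []interface{}:
        fmt.Println("group with", len(v), "children")
    }
}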

GraphQL - How to get field types from the retrieved schema?

Knowing the schema (fetched via getIntrospectionQuery), how could I get the type of a particular field?
For example, say I run this query:
query {
  User {
    name
    lastUpdated
    friends {
      name
    }
  }
}
and get this result:
{
  "data": {
    "User": [
      {
        "name": "alice",
        "lastUpdated": "2018-02-03T17:22:49+00:00",
        "friends": []
      },
      {
        "name": "bob",
        "lastUpdated": "2017-09-01T17:08:49+00:00",
        "friends": [
          {
            "name": "eve"
          }
        ]
      }
    ]
  }
}
I'd like to know the types of the fields and construct something like this:
{
  "name": "String",
  "lastUpdated": "timestamptz",
  "friends": "[Friend]"
}
How could I do that without extra requests to the server?
After retrieving the schema, you can build it into a JSON object (if your GraphQL framework does not already do that for you).
Using a JSON parser, you can then retrieve the types of each field.
I will not go into detail, as it depends on the technology you are using.
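For a concrete example (a sketch assuming the graphql-js reference implementation; introspectionResult stands for the already-fetched result of getIntrospectionQuery, so no extra request to the server is needed):
const { buildClientSchema } = require('graphql');

// Rebuild a schema object from the introspection result fetched earlier.
const schema = buildClientSchema(introspectionResult.data);

// Map each field of the User type to its printed type name.
const userType = schema.getType('User');
const fieldTypes = {};
for (const [name, field] of Object.entries(userType.getFields())) {
    fieldTypes[name] = String(field.type); // e.g. "String", "[Friend]"
}
console.log(fieldTypes);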

DynamoDB DocumentClient returns Set of strings (SS) attribute as an object

I'm new to DynamoDB.
When I read data from the table with the AWS.DynamoDB.DocumentClient class, the query works but I get the result in the wrong format.
Query:
{
  TableName: "users",
  ExpressionAttributeValues: {
    ":param": event.pathParameters.cityId,
    ":date": moment().tz("Europe/London").format()
  },
  FilterExpression: ":date <= endDate",
  KeyConditionExpression: "cityId = :param"
}
Expected:
{
  "user": "boris",
  "phones": ["+23xxxxx999", "+23xxxxx777"]
}
Actual:
{
  "user": "boris",
  "phones": {
    "type": "String",
    "values": ["+23xxxxx999", "+23xxxxx777"],
    "wrapperName": "Set"
  }
}
Thanks!
The [unmarshall] function from [AWS.DynamoDB.Converter] is one solution if your data comes in a shape like this:
{
  "Attributes": {
    "last_names": {
      "S": "UPDATED last name"
    },
    "names": {
      "S": "I am the name"
    },
    "vehicles": {
      "NS": [
        "877",
        "9801",
        "104"
      ]
    },
    "updatedAt": {
      "S": "2018-10-19T01:55:15.240Z"
    },
    "createdAt": {
      "S": "2018-10-17T11:49:34.822Z"
    }
  }
}
Notice the object/map {} spec per attribute, holding the attribute type. This means you are using the [DynamoDB] class and not [DynamoDB.DocumentClient].
[unmarshall] will convert a DynamoDB record into a JavaScript object, as stated and backed by AWS. Ref. https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/DynamoDB/Converter.html#unmarshall-property
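A minimal sketch of applying [unmarshall] to a record shaped like the one above (attribute names taken from that sample):
const AWS = require('aws-sdk');
const { unmarshall } = AWS.DynamoDB.Converter;

// Plain DynamoDB wire format, as returned by the low-level client.
const record = {
    names: { S: "I am the name" },
    vehicles: { NS: ["877", "9801", "104"] }
};

const plain = unmarshall(record);
console.log(plain.names); // "I am the name"
// Set attributes (SS/NS) still come back as the SDK's Set wrapper,
// so read .values to get a plain array.
console.log(plain.vehicles.values); // [877, 9801, 104]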
Nonetheless, I faced the exact same use case as yours: having only one SET-type attribute (NS in my case), I had to do the conversion manually. Here's a snippet:
// Please notice the <setName>, which represents your set attribute name
ddbTransHandler.update(params).promise().then((value) => {
    value.Attributes[<setName>] = value.Attributes[<setName>].values;
    return value; // or value.Attributes
});
Cheers,
Hamlet

Parse Query by subfield/dot notation

tl;dr
Can ParseCloud/MongoDB filter by Pointer<class>.field? By Pointer<class>.Pointer<class>? By the existence of data in that field?
Long question:
Round is an object which will be played automatically when its time comes.
Payment is an object which indicates that a user made a payment. When a payment is spent, we set its round field.
Player links an online User with a Payment.
I need to query Player with a few conditions:
1. Player
   - is online
   - has a valid payment (no round, and valid equal to 'valid')
2. Player
   - user equal to a specific user
   - has no payment
3. Player
   - user equal to a specific user
   - has a valid payment (no round, and valid equal to 'valid')
And I got everything working except validating Payment inside the Player query.
Here is condition 1 from the list.
var query = new Parse.Query(keys.Player);
query.skip(0);
query.limit(oneRoundMaxPlayers);
query.greaterThanOrEqualTo(keys.last_online_date, lastAllowedOnline);
// looks like no filter applied here
query.doesNotExist("payment.round");
query.exists(keys.payment);
// This line will make query return 0 elements
// query.equalTo("payment.valid", "valid");
query.include(keys.user);
query.include(keys.payment);
Here is condition 2 OR condition 3:
var queryPaymentExists = new Parse.Query(keys.Player);
queryPaymentExists.skip(0);
queryPaymentExists.limit(1);
queryPaymentExists.exists(keys.payment);
//This line not filtering
queryPaymentExists.doesNotExist(keys.payment + "." + keys.round);
queryPaymentExists.equalTo(keys.user, user);
// This line makes query always return 0 elements
// queryPaymentExists.equalTo(keys.payment + "." + keys.valid, keys.payment_valid);
var queryPaymentDoesNotExist = new Parse.Query(keys.Player);
queryPaymentDoesNotExist.skip(0);
queryPaymentDoesNotExist.limit(1);
queryPaymentDoesNotExist.doesNotExist(keys.payment);
queryPaymentDoesNotExist.equalTo(keys.user, user);
var compoundQuery = Parse.Query.or(queryPaymentExists, queryPaymentDoesNotExist);
compoundQuery.include(keys.user);
compoundQuery.include(keys.payment);
compoundQuery.include(keys.payment + "." + keys.round);
I've checked the logs from Mongo and they look as follows:
verbose: REQUEST for [GET] /classes/Player: {
  "include": "user,payment,payment.round",
  "where": {
    "$or": [
      {
        "payment": {
          "$exists": true
        },
        "payment.round": {
          "$exists": false
        },
        "user": {
          "__type": "Pointer",
          "className": "_User",
          "objectId": "ASPKs6UVwb"
        }
      },
      {
        "payment": {
          "$exists": false
        },
        "user": {
          "__type": "Pointer",
          "className": "_User",
          "objectId": "ASPKs6UVwb"
        }
      }
    ]
  }
}
Here is the response:
verbose: RESPONSE from [GET] /classes/Player: {
  "response": {
    "results": [
      {
        "objectId": "VHU9uwmLA7",
        "last_online_date": {
          "__type": "Date",
          "iso": "2017-10-28T15:15:23.547Z"
        },
        "user": {
          "objectId": "ASPKs6UVwb",
          "username": "cn92Ekv5WPJcuHjkmTajmZMDW",
          "createdAt": "2017-10-22T11:43:16.804Z",
          "updatedAt": "2017-10-25T09:23:20.035Z",
          "ACL": {
            "*": {
              "read": true
            },
            "ASPKs6UVwb": {
              "read": true,
              "write": true
            }
          },
          "__type": "Object",
          "className": "_User"
        },
        "createdAt": "2017-10-27T21:03:35.442Z",
        "updatedAt": "2017-10-28T15:15:23.556Z",
        "payment": {
          "objectId": "nr7ln7U3eJ",
          "payment_date": {
            "__type": "Date",
            "iso": "2017-10-27T23:42:50.614Z"
          },
          "user": {
            "__type": "Pointer",
            "className": "_User",
            "objectId": "ASPKs6UVwb"
          },
          "createdAt": "2017-10-27T23:42:50.624Z",
          "updatedAt": "2017-10-28T15:12:30.131Z",
          "valid": "valid",
          "round": {
            "objectId": "jF9gqG4ndh",
            "round_date": {
              "__type": "Date",
              "iso": "2017-10-28T15:12:00.027Z"
            },
            "createdAt": "2017-10-28T15:11:00.036Z",
            "updatedAt": "2017-10-28T15:12:30.108Z",
            "ACL": {
              "*": {
                "read": true
              }
            },
            "__type": "Object",
            "className": "Round"
          },
          "ACL": {
            "ASPKs6UVwb": {
              "read": true
            }
          },
          "__type": "Object",
          "className": "Payment"
        },
        "ACL": {
          "ASPKs6UVwb": {
            "read": true
          }
        }
      }
    ]
  }
}
You can see that the response contains payment.round.
My questions are the following:
Can ParseCloud/MongoDB filter by Pointer<class>.field? By Pointer<class>.Pointer<class>? By the existence of data in that field?
How can I work around the situation where I need to check field presence, given that a User can have many Players and a User can have many Payments?
UPD
As far as I found, MongoDB should support filtering by "dot notation":
mongodb query by sub-field
So what am I doing wrong?
Short answer:
No
Simplify your data structure
Long answer:
Dot notation can be used to:
- include documents of pointers, as you already did in your code, e.g. include(keys.user)
- filter for properties of fields, e.g. {propertyA: 1, propertyB: 2}, where all the data is in the field itself, not in another document in another collection referenced by a Parse pointer
Dot notation cannot be used as a filter parameter for referenced pointers in a Parse query. MongoDB does not support such filtering either; the concept of a pointer is Parse's, not MongoDB's. In a NoSQL environment like MongoDB there are no relations between tables to be used in the query language, as it is not a "relational database" like an SQL database. However, Parse provides some of the comfort of SQL for simple queries with its concepts of pointers, compound queries, and matchesKeyInQuery.
If that is not sufficient in your case, simply add the fields to the collection, at the expense of having the same fields and data in multiple collections, but with the advantage of faster query execution.
Finding the right data structure is one of the big topics for NoSQL as there is no general right structure. The collections and document structures are basically designed as a trade-off between:
- execution performance
- query necessity / frequency
- security (access level)
- data storage size
And they are fluid and can change over time. As your app and its queries evolve, you'd also change the data structure if the long-term gain is greater than the one-time effort.
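As a sketch of that Parse-side comfort (not from the original answer; class and field names are taken from the question), an inner query on Payment combined with matchesQuery can express the "valid payment" condition without dot notation, because the filter runs against the Payment collection itself:
var paymentQuery = new Parse.Query("Payment");
paymentQuery.doesNotExist("round");
paymentQuery.equalTo("valid", "valid");

var playerQuery = new Parse.Query("Player");
playerQuery.equalTo("user", user);
// Match Players whose 'payment' pointer satisfies the inner query.
playerQuery.matchesQuery("payment", paymentQuery);
playerQuery.include("payment");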

Confusion creating swagger.json file from proto file

I've created a proto file with all the necessary messages and rpc functions for a REST webservice I intend to generate. Using the protoc-gen-swagger plugin I've managed to compile that proto file into a swagger.json file and all seems well, except for two things, which I can't seem to work out.
1. All the definitions in the swagger.json file have the name of my proto file's package prefixed onto them. Is there a way to get rid of this?
2. All of the fields of my messages are "optional". They're not specified explicitly as such, but they're not specified as "required" either, which, by definition, makes them optional. Proto3 no longer supports required/optional/repeated, but even if I use proto2 and add those keywords, it doesn't seem to affect the swagger.json output. How do I specify in my proto file that a field is required, such that protoc-gen-swagger will add the required section to the JSON output?
Here is a sample of a very basic proto file:
webservice.proto
syntax = "proto3";

package mypackage;

import "google/api/annotations.proto";

service MyAPIWebService {
  rpc MyFunc (MyMessage) returns (MyResponse) {
    option (google.api.http) = {
      post: "/message"
      body: "*"
    };
  }
}

message MyMessage {
  string MyString = 1;
  int64 MyInt = 2;
}

message MyResponse {
  string MyString = 1;
}
This is then compiled into a swagger.json file with the following command:
protoc -I. -I"%GOPATH%/src/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis" --swagger_out=logtostderr=true:. webservice.proto
which yields the following output:
webservice.swagger.json
{
  "swagger": "2.0",
  "info": {
    "title": "webservice.proto",
    "version": "version not set"
  },
  "schemes": [
    "http",
    "https"
  ],
  "consumes": [
    "application/json"
  ],
  "produces": [
    "application/json"
  ],
  "paths": {
    "/message": {
      "post": {
        "operationId": "MyFunc",
        "responses": {
          "200": {
            "description": "",
            "schema": {
              "$ref": "#/definitions/mypackageMyResponse"
            }
          }
        },
        "parameters": [
          {
            "name": "body",
            "in": "body",
            "required": true,
            "schema": {
              "$ref": "#/definitions/mypackageMyMessage"
            }
          }
        ],
        "tags": [
          "MyAPIWebService"
        ]
      }
    }
  },
  "definitions": {
    "mypackageMyMessage": {
      "type": "object",
      "properties": {
        "MyString": {
          "type": "string"
        },
        "MyInt": {
          "type": "string",
          "format": "int64"
        }
      }
    },
    "mypackageMyResponse": {
      "type": "object",
      "properties": {
        "MyString": {
          "type": "string"
        }
      }
    }
  }
}
Notice how MyMessage and MyResponse in the proto file translate to mypackageMyMessage and mypackageMyResponse in the JSON file.
If I wanted, for instance, MyMessage:MyString to be required, I'd have to add a section to the "mypackageMyMessage" section in "definitions" that looks like this:
"required":[
"MyString"
]
I would definitely prefer if there was a way I could specify that in the proto file already so that I don't have to manually edit the json file every time I compile it.
Posting here for anyone else who comes across this question looking for the same information.
UPDATE
This is where the code defines how definitions are created.
https://github.com/grpc-ecosystem/grpc-gateway/blob/master/protoc-gen-swagger/genswagger/template.go#L859
This is how you can denote fields as required -- add a custom option to your message definition:
message MyMessage {
  option (grpc.gateway.protoc_gen_swagger.options.openapiv2_schema) = {
    json_schema: {
      title: "MyMessage"
      description: "Does something neat"
      required: ["MyString"]
    }
  };
  string MyString = 1;
  int64 MyInt = 2;
}
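For the option above to compile, the openapiv2 options proto has to be importable (a sketch; the import path assumes the grpc-ecosystem/grpc-gateway repository layout, made visible to protoc via -I):
syntax = "proto3";

package mypackage;

import "google/api/annotations.proto";
// Assumed path inside the grpc-gateway repository checkout:
import "protoc-gen-swagger/options/annotations.proto";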
The mypackage prefix is part of the namespacing in your .proto file. It comes from the line "package mypackage;". Delete this line and regenerate the JSON.
