How to properly use self-validation expressions (#?) with enums

So I have a response where the field roles is of type enum:
{
"content": [
{
"id": "1",
"roles": []
},
{
"id": "2",
"roles": [
"manager"
]
},
{
"id": "3",
"roles": [
"client"
]
}
]
}
I want to validate that each role in the response is one of the enum values, so I was trying something like:
* match each response..role == ['#? enumRoles.contains(_)'}
But I kept getting errors. The solution I was trying is derived from "How to validate multiple possible values in Karate using a schema". Then I found out that the accepted solution there was also yielding "unexpected end of file" and "message not supported" errors.
Any help is appreciated. I am using 1.2.0.RC1.

This is trickier than I expected, but here is a solution. Please read the docs to understand what the API does.
* def response =
"""
{
"content": [
{
"id": "1",
"roles": []
},
{
"id": "2",
"roles": [
"manager"
]
},
{
"id": "3",
"roles": [
"client"
]
}
]
}
"""
* def roles = ['manager', 'client']
* def temp1 = $..roles[*]
* def temp2 = karate.distinct(temp1)
* match roles contains temp2
There is no such thing as an enum in plain JSON. Consider writing custom JS functions if needed; here is an example: https://stackoverflow.com/a/62567412/143475
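For illustration, the custom-JS-function approach could look like the rough, untested sketch below, reusing the $..roles[*] extraction from the answer above; enumRoles and isRole are made-up names, and includes() is used because Karate 1.x evaluates the function as JavaScript:
* def enumRoles = ['manager', 'client']
* def isRole = function(x){ return enumRoles.includes(x) }
* def allRoles = $..roles[*]
* match each allRoles == '#? isRole(_)'
This fails the scenario as soon as any role value falls outside the allowed list.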

Related

Delete existing records if they are not in the sent array (Rails 5 API)

I need help on how to delete records that exist in the DB but are not in the array sent in a request.
My Array:
[
{ "id": "509",
"name": "Motions move great",
"body": "",
"subtopics": [
{
"title": "Tywan",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
},
{
"title": "Transportations Gracious",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
},
{
"title": "Transportation part",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
}
]
},
{
"name": "Motions kkk",
"body": "",
"subtopics": [
{
"title": "Transportations",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
}
]
}
]
Below is my implementation; where am I going wrong?
@topics = @course.topics.map{|m| m.id()}
@delete = @topics
puts @delete
if Topic.where.not('id IN(?)', @topics).any?
  @topics.each do |topic|
    topic.destroy
  end
end
It's not clear to me where, in your code, you pick up the ids sent in the array you showed before... so I'm assuming something like this:
objects_sent = [
{ "id": "509",
"name": "Motions move great",
"body": "",
"subtopics": [
{
"title": "Tywan",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
},
{
"title": "Transportations Gracious",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
},
{
"title": "Transportation part",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
}
]
},
{
"name": "Motions kkk",
"body": "",
"subtopics": [
{
"title": "Transportations",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
}
]
}
]
Since you have your array like this, the only information you need to query the database is the ids (also assuming the ids in the array are the ids in the database, otherwise it wouldn't make sense). You can get them like this:
sent_ids = objects_sent.map { |o| o[:id] }.compact.map(&:to_i)
Also, it seems to me that, for the code you showed, you want to destroy them based on a specific course. There are two ways to do that. First, using the association (I prefer this one):
@course.topics.where.not(id: sent_ids).destroy_all
Or you can do the query directly on the Topic model, passing the course_id param:
Topic.where(course_id: @course.id).where.not(id: sent_ids).destroy_all
ActiveRecord is smart enough to build that query correctly either way. Give it a test and see which works better for you.
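Putting the pieces together, a hypothetical controller action could look like the untested sketch below; the action name and the params handling are assumptions (a raw JSON array request body is exposed under the :_json key in Rails):
def sync_topics
  sent_topics = params[:_json] || []
  # collect the ids that were sent; entries without an id (new records) are skipped
  sent_ids = sent_topics.map { |t| t[:id] }.compact.map(&:to_i)
  # destroy this course's topics that were not included in the request
  @course.topics.where.not(id: sent_ids).destroy_all
end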

How to cleanly batch queries together in Gremlin

I am writing a GraphQL resolver that retrieves all vertices connected by a particular edge, using the following query (the created field resolves to vertices with the label person):
software {
created {
name
}
}
Which would resolve to the following Gremlin query for each software node found:
g.V().hasLabel('software').has('name', 'ripple').in('created')
This returns a result that includes all properties of the object:
{
"result": [
{
"@type": "d",
"@rid": "#24:0",
"@version": 6,
"@class": "person",
"in_knows": [
"#35:0"
],
"name": "josh",
"out_created": [
"#32:0",
"#33:0"
],
"age": 32,
"@fieldTypes": "in_knows=g,out_created=g"
}
],
"dbStats": {
...
}
}
I realize that this will fall foul of GraphQL's N+1 query problem, so I'm trying to batch queries together using a DataLoader pattern. (I'm also hoping to do property selections, so I'm not asking the database to return too much info.)
So I'm trying to craft a query like so:
g.V().union(
__.hasLabel('software').has('name', 'ripple').
project('parent', 'child').by('id').
by(__.in('created').fold()),
__.hasLabel('software').has('name', 'lop').
project('parent', 'child').by('id').
by(__.in('created').fold())
)
But this results in the following, where the props are missing and it only includes the ids of the vertices I want:
{
"result": [
{
"parent": "ripple",
"child": [
"#24:0"
]
},
{
"parent": "lop",
"child": [
"#22:0",
"#23:0",
"#24:0"
]
}
],
"dbStats": {
...
}
}
My question is: how can I have the Gremlin query return all of the props for the found vertices and none of the other props? Should I even be doing batching this way?
For anyone else reading: the query I was trying to write wouldn't work, because the TraversalSet created in the .by(__.in('created')) step can't be cast from a List to an ElementMap, as the stream cardinality wouldn't be enforced. (You can only have one record per row, I think?)
My working query duplicates the keys for each row and specifies the props needed (the query below is OK for Gremlin 3.3 as used in ODB; if you've got Gremlin 3.4.4 or later you can replace the last by() step with by(elementMap('name', 'age'))):
g.V().union(
__.hasLabel('software').has('name', 'ripple').
as('parent').
in('created').as('child').
select('parent', 'child').
by(values('name')).
by(properties('id', 'name', 'age').
group().by(__.key()).
by(__.value())),
__.hasLabel('software').has('name', 'lop').
as('parent').
in('created').as('child').
select('parent', 'child').
by(values('name')).
by(properties('id', 'name', 'age').
group().by(__.key()).
by(__.value()))
)
So that you get a result like this:
{"data": [
{
"parent": "ripple",
"child": {
"id": 5717,
"name": "josh",
"age": 32
}
},
{
"parent": "lop",
"child": {
"id": 5709,
"name": "peter",
"age": 35
}
},
{
"parent": "lop",
"child": {
"id": 5713,
"name": "marko",
"age": 29
}
},
{
"parent": "lop",
"child": {
"id": 5717,
"name": "josh",
"age": 32
}
}
]
}
Which would allow you to create a lookup where you concat all results for "lop" and "ripple" into arrays.
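As a side note, on TinkerPop 3.4.4 or later (where elementMap is available) the batching could be written without one union branch per software vertex. The following is an untested sketch using the sample names from above; depending on how the script is submitted you may need the usual __ and P (within) imports:
g.V().hasLabel('software').has('name', within('ripple', 'lop')).
  project('parent', 'child').
    by(values('name')).
    by(__.in('created').elementMap('name', 'age').fold())
Each result row then carries the parent name plus a list of child property maps, which feeds the same lookup-building step described above.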

Apollo readQuery Fails Even Though Target Object is Present?

I'm working on a call to readQuery. I'm getting an error message:
modules.js?hash=2d0033b4773d9cb6f118946043f7a3d4385825fe:25847
Error: Can't find field resolutions({"id":"Resolution:DHSzPa8bvPCDjuAac"})
on object (ROOT_QUERY) {
"resolutions": [
{
"type": "id",
"id": "Resolution:AepgCCio9KWGkwyMC",
"generated": false
},
{
"type": "id",
"id": "Resolution:DHSzPa8bvPCDjuAac", // <==ID I'M SEEKING
"generated": false
}
],
"user": {
"type": "id",
"id": "User:WWv57KsvqWeAoBNHY",
"generated": false
}
}.
The object with that id appears to be plainly visible as the second entry in the list of resolutions.
Here's my query:
const GET_CURRENT_RESOLUTION_AND_GOALS = gql`
query Resolutions($id: String!) {
resolutions(id: $id) {
_id
name
completed
goals {
_id
name
completed
}
}
}
`;
...and here's how I'm calling it:
<Mutation
mutation={CREATE_GOAL}
update={(cache, {data: {createGoal}}) => {
let id = 'Resolution:' + resolutionId;
const {resolutions} = cache.readQuery({
query: GET_CURRENT_RESOLUTION_AND_GOALS,
variables: {
id
},
});
}}
>
What am I missing?
Update
Per the GraphQL Dev Tools extension for Chrome, here's the whole GraphQL data store:
{
"data": {
"resolutions": [
{
"_id": "AepgCCio9KWGkwyMC",
"name": "testing 123",
"completed": false,
"goals": [
{
"_id": "TXq4nvukpLcqQhMRL",
"name": "test goal abc",
"completed": false,
"__typename": "Goal"
}
],
"__typename": "Resolution"
},
{
"_id": "DHSzPa8bvPCDjuAac",
"name": "testing 345",
"completed": false,
"goals": [
{
"_id": "PEkg5oEEi2tJ6i8LH",
"name": "goal abc",
"completed": false,
"__typename": "Goal"
},
{
"_id": "X4H4dFzGm5gkq5bPE",
"name": "goal bcd",
"completed": false,
"__typename": "Goal"
},
{
"_id": "hYunrXsMq7Gme7Xck",
"name": "goal cde",
"completed": false,
"__typename": "Goal"
}
],
"__typename": "Resolution"
}
],
"user": {
"_id": "WWv57KsvqWeAoBNHY",
"__typename": "User"
}
}
}
Posted as an answer for fellow Apollo users with similar problems:
Remove the Resolution: prefix; the query should only take the id.
Then the question arises: how is your data store filled?
To read a query from cache, the query needs to have been called with exactly the same arguments against the remote API before; that is how Apollo knows what the result for a field with specific arguments is. If you never called the remote endpoint with the arguments you want to use but know what the result would be, you can circumvent that and resolve the query locally by implementing a cache resolver. Have a look at the example in the documentation. There the store contains a list of books (in your case resolutions), and the query for a single book by id can be resolved with a simple cache lookup.
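For reference, such a cache redirect could look roughly like the untested sketch below (Apollo Client 2.x cacheRedirects API); it returns an array because the resolutions field is a list, and the exact id handling is an assumption:
import { InMemoryCache } from 'apollo-cache-inmemory';

const cache = new InMemoryCache({
  cacheRedirects: {
    Query: {
      // hypothetical: point resolutions(id: ...) at entries already in the cache
      resolutions: (_, args, { getCacheKey }) => [
        getCacheKey({ __typename: 'Resolution', id: args.id }),
      ],
    },
  },
});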

Logstash 5.0 Ruby Filter Can't Update Hash in Array

I'm a newbie to both Logstash and Ruby, and I met a subtle problem today.
My input JSON is like the following:
{
"1": "1",
"2": "2",
"market": [
{
"id": "1",
"name": "m1"
},
{
"id": "2",
"name": "m2"
}
]
}
My filter is like the following code, and I want to set event["1"] to m1, event["2"] to m2, event["market"][0]["id"] to m1, event["market"][1]["id"] to m2:
filter {
......
ruby {
code => "
markets = event.get('market')
markets.each_index do |index|
event.set(markets[index]['id'], markets[index]['name'])
markets[index]['id'] = markets[index]['name']
end
"
}
......
}
And the output is the following:
{
"1": "m1",
"2": "m2",
"market": [
{
"id": "1",
"name": "m1"
},
{
"id": "2",
"name": "m2"
}
]
}
The event["1"] and event["2"] fields get the expected values, but event["market"][0]["id"] and event["market"][1]["id"] do not, and I want to know why. The desired output should be:
{
"1": "m1",
"2": "m2",
"market": [
{
"id": "m1",
"name": "m1"
},
{
"id": "m2",
"name": "m2"
}
]
}
PS: The Logstash I'm using is version 5.0.
I think it is because of the new Event API introduced in Logstash 5.0. After changing my filter to the following, I get the desired output:
filter {
......
ruby {
code => "
markets = event.get('market')
markets.each_index do |index|
event.set(markets[index]['id'], markets[index]['name'])
markets[index]['id'] = markets[index]['name']
end
event.set('market', markets)  # write the mutated copy back into the event
"
}
......
}
According to a Logstash GitHub issue, mutating a collection after setting it in the Event has undefined behaviour.
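In other words, treat the value returned by event.get as a detached copy: mutate the copy, then write it back with event.set. A minimal, untested sketch of that pattern with the same field names:
ruby {
  code => "
    markets = event.get('market') || []         # returns a copy, not a live reference
    markets.each { |m| m['id'] = m['name'] }    # mutate the copy
    event.set('market', markets)                # write the copy back into the event
  "
}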

Filter by languages in 2nd and 4th level keys of a couchdb document

Given the following document in CouchDB....
{
"_id": "002bafd55b353692a7ab2968074310cc2cbff258",
"_rev": "1-bc853056ac61d817ae3c4ecb4f81322b",
"names": [
{ "locale": "en", "value": "Example" },
{ "locale": "de", "value": "Beispiel" },
{ "locale": "fr", "value": "Exemple" }
],
"details": [
{ "locale": "en", "value": "An Example is here" },
{ "locale": "de", "value": "Ein Beispiel ist hier" },
{ "locale": "fr", "value": "Un exemple est ici" }
]
}
...how can I write a view that will allow me to return a partial document with
the undesired languages filtered out?
curl ..snip.. '_design/locale_filter/?locale=en,de,fr,it'
curl ..snip.. '_design/locale_filter/?locale=en,fr'
curl ..snip.. '_design/locale_filter/?locale=en'
Should return something looking like this:
{
"_id": "002bafd55b353692a7ab2968074310cc2cbff258",
"_rev": "1-bc853056ac61d817ae3c4ecb4f81322b",
"names": [
{ "locale": "en", "value": "Example" }
],
"details": [
{ "locale": "en", "value": "An Example is here" }
]
}
There's also a sub-case where the documents have a further, deeper structure that repeats the names and details structure; these would also be filtered in an ideal world:
{
"_id": "002bafd55b353692a7ab2968074310cc2cbff258",
"_rev": "1-bc853056ac61d817ae3c4ecb4f81322b",
"names": [ ... snip ... ],
"details": [ ... snip ... ],
"deeper": {
"names": [
{ "locale": "en", "value": "Sub-Example" }
],
"details": [
{ "locale": "en", "value": "The Sub-Example is here" }
]
}
}
I also note that this might not be a view but rather a show; the CouchDB documentation says that a show is for transforming documents into any format.
The final question from a beginner is whether there's some way to make it easier to work on CouchDB views and design docs. Right now I'm experimenting with erica, which feels like overkill, as I'm pretty sure I don't want a couch app; I just want to easily maintain my views in files on disk and sync them with the couch database whenever I've made significant enough changes.
I was able to implement this using show functions. I implemented two, the first for convenience:
(doc, req) ->
all_locales = []
for name in doc.names
all_locales.push name.locale
toJSON(all_locales)
(In my real code I also run it over details and remove duplicate locales.)
This allows me to do the following:
GET /_design/dbname/_show/list_locales/c0db9ad..snip..
and returns ["en", "de", "fr"], for example - whatever locales the document happens to have.
I can then follow up with the function to retrieve the filtered document:
(doc, req) ->
locales = req.query.locales.split(",")
doc.names = doc.names.filter (name) ->
locales.indexOf(name.locale) > -1
doc.details = doc.details.filter (overview) ->
locales.indexOf(overview.locale) > -1
return toJSON(doc) + "\n"
The usage pattern for this is:
GET /_design/dbname/_show/restrict_locales/c0db9ad..snip..?locales=en,fr
GET /_design/dbname/_show/restrict_locales/c0db9ad..snip..?locales=fr
GET /_design/dbname/_show/restrict_locales/c0db9ad..snip..?locales=en,fr,de,it,hu,zh
It works remarkably well, and was much faster than I expected. I believe show function results are aggressively cached by CouchDB.
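For anyone not writing CoffeeScript, the restrict_locales show function compiles to roughly the plain JavaScript below as stored in the design doc (a sketch; toJSON is provided by the CouchDB query server):
function (doc, req) {
  var locales = req.query.locales.split(",");
  var byLocale = function (item) {
    return locales.indexOf(item.locale) > -1;  // keep only the requested locales
  };
  doc.names = doc.names.filter(byLocale);
  doc.details = doc.details.filter(byLocale);
  return toJSON(doc) + "\n";
}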
