Dynamic GraphQL variable names - graphql

The following is a GraphQL example:
Query:
query ABC(
  $x: String!
  $y: timestamp!
  $z: timestamp!
) {
  area
  getInTime
  getOutTime
}
Variable:
{
  "x": "22222",
  "y": "333",
  "z": "4444"
}
Now, is there any way that I could use dynamic variable names instead of the query parameter names (x, y, z), without changing the parameter names in the query?
Thanks.
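GraphQL itself matches the keys of the variables object against the declared variable names, so the usual workaround is to keep $x, $y, $z fixed in the query and remap the dynamic names client-side when building the request. A minimal sketch in JavaScript (the dynamicValues keys and the /graphql endpoint are assumptions, not part of the question):

// Hypothetical inputs whose names are not known in advance
const dynamicValues = { userId: "22222", from: "333", to: "4444" };

// Remap the dynamic names onto the fixed variable names the query declares;
// the server only ever sees x, y and z
const variables = {
  x: dynamicValues.userId,
  y: dynamicValues.from,
  z: dynamicValues.to,
};

fetch("/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    query: "query ABC($x: String!, $y: timestamp!, $z: timestamp!) { area getInTime getOutTime }",
    variables,
  }),
});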

Related

How to properly escape parameters for JSONB deep query in node-postgres

When bypassing an ORM and doing direct queries with node-postgres, there is a pile of weird edge cases to keep in mind. For example, you have probably already encountered the fact that camelCase column names have to be double-quoted, and that parameter placeholders can take explicit type casts…
client.query(`SELECT id, "authorFirstName", "authorLastName" FROM books WHERE isbn = $1::int`, [1444723448])
client.query(`SELECT id FROM books WHERE "authorLastName" = $1::text`, ['King'])
JSON and JSONB types add another aspect of weirdness. The important thing to keep in mind is that "$1" is not merely a variable placeholder; it is an indicator of a discrete unit of information.
Given a table where characters is a column of type JSONB, this will not work…
client.query(
  `SELECT id FROM books WHERE characters @> ([ {'name': $1::text} ])`,
  ['Roland Deschain']
)
This fails because the unit of information is the JSON object, not a string you're inserting into a blob of text.
This becomes a little clearer when one looks at a simpler SELECT and an UPDATE…
const userData = await client.query(
  `SELECT characters FROM books WHERE id = $1::uuid`,
  [ some_book_id ]
)
const newCharacters = JSON.stringify([
  ...userData.rows[0].characters,
  { name: 'Jake' },
  { name: 'Eddie' },
  { name: 'Odetta' }
])
await client.query(
  `UPDATE books SET characters = $1::jsonb WHERE id = $2::uuid`,
  [ newCharacters, some_book_id ]
)
The deep search query should be formed like this:
const searchBundle = JSON.stringify([
{'name': 'Roland Deschain'}
])
client.query(
  `SELECT id FROM books WHERE characters @> $1::jsonb`,
  [searchBundle]
)
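For completeness, here is a minimal self-contained sketch of the containment search (assuming the pg package, a books table as above, and connection settings supplied through the standard PG* environment variables):

const { Client } = require('pg')

async function findBooksByCharacter(name) {
  const client = new Client() // reads PGHOST, PGUSER, etc. from the environment
  await client.connect()
  // The whole JSON fragment travels as one discrete parameter, per the note above
  const searchBundle = JSON.stringify([{ name }])
  const res = await client.query(
    'SELECT id FROM books WHERE characters @> $1::jsonb',
    [searchBundle]
  )
  await client.end()
  return res.rows
}

findBooksByCharacter('Roland Deschain').then(console.log)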

Search and extract element located in various path of json structure

I have JSON in a PostgreSQL database and I need to extract an array that is not always located in the same place.
Problem
I need to extract the choices array of a particular element.
The element name is known, but not where it sits in the structure.
Rules
All element names are unique.
The choices attribute may be absent.
JSON structure
pages : [
  {
    name : 'page1',
    elements : [
      { name : 'element1', choices : [...] },
      { name : 'element2', choices : [...] }
    ]
  }, {
    name : 'page2',
    elements : [
      {
        name : 'element3',
        templateElements : [
          {
            name : 'element4',
            choices : [...]
          }, {
            name : 'element5',
            choices : [...]
          }
        ]
      }, {
        name : 'element6',
        choices : [...]
      }
    ]
  }, {
    name : 'element7',
    templateElements : [
      {
        name : 'element8',
        choices : [...]
      }
    ]
  }
]
My attempt to extract the elements by flattening the structure:
SELECT pages::jsonb ->> 'name',
  pageElements::jsonb ->> 'name',
  pageElements::jsonb -> 'choices',
  pages.*
FROM myTable as myt,
  jsonb_array_elements(myt.json -> 'pages') as pages,
  jsonb_array_elements(pages -> 'elements') as pageElements
Alas, the choices column is always null in my results. And this will not work when the element is located somewhere else, like
page.elements.templateElements
page.templateElements
... and so on.
I don't know if there is a way to search for a key (name) wherever it sits in the JSON structure and extract another key (choices).
I wish to call a SELECT with the element name as a parameter and have it return the choices of this element.
For instance, if I call the SELECT with an element name (element1, element4, or element8), the choices array (as rows, JSON, or text, no preference here) of this element should be returned.
Wow! The solution found goes beyond expectations! JSONPath was the way to go.
Amazing what we can do with this.
SQL
-- Use jsonpath to search, filter and return what's needed
SELECT jsonb_path_query(
  myt.jsonb,
  '$.** ? (@.name == "element_name_to_look_at")'
)->'choices' as jsonbChoices
FROM myTable as myt
Explanation of the jsonpath in SQL
jsonb_path_query(jsonb_data, '$.** ? (@.name == "element_name_to_look_at")')->'choices'
jsonb_path_query : PostgreSQL jsonpath function
jsonb_data : database column with jsonb data, or any jsonb expression
$.** : search everywhere from the root element
? : where clause / filter
@ : the object returned by the search
@.name == "element_name_to_look_at" : keep every object whose name equals element_name_to_look_at
->'choices' : for each object returned by the jsonpath, get the choices attribute
Final version
After getting the choices jsonb array, we return a dataset with every choice.
The choices arrays look like this:
[{"value": "code1", "text": "Code Label 1"}, {"value": "code2", "text": "Code Label 2"}, ...]
SELECT choices.*
FROM (
  -- Use jsonpath to search, filter and return what's needed
  SELECT jsonb_path_query(myt.jsonb, '$.** ? (@.name == "element_name_to_look_at")')->'choices' as jsonbChoices
  FROM myTable as myt
) choice,
-- Explode the returned json array into columns
jsonb_to_recordset(choice.jsonbChoices) as choices(value text, text text);
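To pass the element name as a real query parameter from application code (the wish stated above), jsonb_path_query also accepts a vars argument, so the name never has to be spliced into the path string. A sketch using node-postgres (the pg client is an assumption; table and column names are as above):

const { Client } = require('pg')

async function getChoices(elementName) {
  const client = new Client()
  await client.connect()
  // $elem inside the jsonpath is bound through the third (vars) argument
  const res = await client.query(
    `SELECT jsonb_path_query(
       myt.jsonb,
       '$.** ? (@.name == $elem)',
       jsonb_build_object('elem', $1::text)
     )->'choices' AS "jsonbChoices"
     FROM myTable AS myt`,
    [elementName]
  )
  await client.end()
  return res.rows
}

getChoices('element4').then(console.log)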

RethinkDB filter array, return only matched values

I have a table like this:
{
  dummy: [
    "new val",
    "new val 2",
    "new val 3",
    "other"
  ]
}
I want to get only the values matching "new", so I am using this query:
r.db('db').table('table')('dummy').filter(function (val) {
return val.match("^new")
})
but it gives this error:
e: Expected type STRING but found ARRAY in
What is wrong with the query? If I remove .match("^new"), it returns all values.
Thanks.
The reason you're getting Expected type STRING but found ARRAY is that the value of the dummy field is itself an array, and you cannot apply match to arrays.
You have to rethink your query a bit: map each dummy array to a new array whose values are filtered by an inner expression keeping only the elements that match ^new.
For example:
r.db('db')
.table('table')
.getField('dummy')
.map((array) => array.filter((element) => element.match("^new")))
Output:
[
  "new val",
  "new val 2",
  "new val 3"
]
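If instead you want the matching documents themselves (not the remapped arrays), one option, sketched here, is to keep rows whose dummy array contains at least one match:

r.db('db')
  .table('table')
  .filter((row) =>
    // Keep documents where at least one element of dummy matches ^new
    row('dummy').filter((element) => element.match("^new")).count().gt(0)
  )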

How do I dynamically name a collection?

Pseudo-code: collect(n) AS :Label
The primary purpose of this is easy reading of the properties in the API server (a node application).
Verbose example:
MATCH (user:User)--(n)
WHERE n:Movie OR n:Actor
RETURN user,
CASE
WHEN n:Movie THEN "movies"
WHEN n:Actor THEN "actors"
END as type, collect(n) as :type
Expected output in JSON:
[{
"user": {
....
},
"movies": [
{
"_id": 1987,
"labels": [
"Movie"
],
"properties": {
....
}
}
],
"actors:" [ .... ]
}]
The closest I've gotten is:
[{
"user": {
....
},
"type": "movies",
"collect(n)": [
{
"_id": 1987,
"labels": [
"Movie"
],
"properties": {
....
}
}
]
}]
The goal is to be able to read the JSON result with ease, like so:
neo4j.cypher.query(statement, function (err, results) {
  for (const result of results) {
    var user = result.user
    var movies = result.movies
  }
})
Edit:
I apologize for any confusion caused by my inability to correctly name database semantics.
I'm wondering if it's enough just to output the user and their lists of both actors and movies, rather than trying a more complicated means of matching and combining both.
MATCH (user:User)
OPTIONAL MATCH (user)--(m:Movie)
OPTIONAL MATCH (user)--(a:Actor)
RETURN user, COLLECT(DISTINCT m) as movies, COLLECT(DISTINCT a) as actors
(DISTINCT guards against the row cross product created by the two OPTIONAL MATCH clauses.)
This query should return each User and his/her related movies and actors (in separate collections):
MATCH (user:User)--(n)
WHERE n:Movie OR n:Actor
RETURN user,
REDUCE(s = {movies:[], actors:[]}, x IN COLLECT(n) |
CASE WHEN x:Movie
THEN {movies: s.movies + x, actors: s.actors}
ELSE {movies: s.movies, actors: s.actors + x}
END) AS types;
As far as a dynamic solution to your question goes, one that will work with any node connected to your user, there are a few options, but I don't believe you can make the column names dynamic like this, or even the names of the returned collections, though we can associate them with the type.
MATCH (user:User)--(n)
WITH user, LABELS(n) as type, COLLECT(n) as nodes
WITH user, {type:type, nodes:nodes} as connectedNodes
RETURN user, COLLECT(connectedNodes) as connectedNodes
Or, if you prefer working with multiple rows, one row each per node type:
MATCH (user:User)--(n)
WITH user, LABELS(n) as type, COLLECT(n) as collection
RETURN user, {type:type, data:collection} as connectedNodes
Note that LABELS(n) returns a list of labels, since nodes can be multi-labeled. If you are guaranteed that every node of interest has exactly one label, you can use the first element of the list rather than the list itself: just use LABELS(n)[0] instead.
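Since the column names cannot be dynamic, one fallback, sketched here in the same callback style the question uses (result-access details vary by driver version), is to regroup connectedNodes client-side:

neo4j.cypher.query(statement, function (err, results) {
  if (err) throw err
  for (const result of results) {
    const user = result.user
    const grouped = {}
    // Turn [{type: ['Movie'], nodes: [...]}, ...] into {movies: [...], actors: [...]}
    for (const { type, nodes } of result.connectedNodes) {
      grouped[type[0].toLowerCase() + 's'] = nodes
    }
    // grouped.movies and grouped.actors can now be read directly
  }
})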
You can dynamically group nodes by label and then convert the pairs to a map using the apoc library:
WITH ['Actor','Movie'] as LBS
// What are the nodes we need:
MATCH (U:User)--(N) WHERE size(filter(l in labels(N) WHERE l in LBS))>0
WITH U, LBS, N, labels(N) as nls
UNWIND nls as nl
// Combine the nodes on their labels:
WITH U, LBS, N, nl WHERE nl in LBS
WITH U, nl, collect(N) as RELS
WITH U, collect( [nl, RELS] ) as pairs
// Convert pairs "label - values" to the map:
CALL apoc.map.fromPairs(pairs) YIELD value
RETURN U as user, value

Faster query by value

I want to query MongoDB to find how many top-level documents have 0 as the value of one of the nested fields of results.
For instance, in this collection:
{name: "mary", results: {"foo" : 0, "bar" : 8}}
{name: "bob", results: {"baz" : 9, "qux" : 0}}
{name: "leia", results: {"foo" : 9, "norf" : 5}}
my query should return 2, because two of the documents have 0 as a value of a nested field of results.
Here's my attempt:
db.collection.find({$where: function() {
  for (var key in this.results) {
    if (this.results[key] === 0) { return true; }
  }
  return false;
}})
which works on the above dataset but is too slow. My real data are 100k documents, each having 500 nested fields inside results, and the above query takes a few minutes. Is it possible to design this query in a faster way?
There is no way to do it other than the one you are using.
You can only change the schema or use aggregation, but I don't think that is what you want.
There is a post about it you can check here:
mongoDB: find by embedded value
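For what it's worth, a sketch of the aggregation route (requires MongoDB 3.4.4+ for $objectToArray; it still scans every document, so it may or may not beat $where in practice):

db.collection.aggregate([
  // Turn the results subdocument into an array of {k, v} pairs
  { $project: { kv: { $objectToArray: "$results" } } },
  // Keep documents where at least one pair has value 0
  { $match: { "kv.v": 0 } },
  // Count the surviving documents
  { $count: "matching" }
])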
