GraphQL Dynamic Query with where clause

I am trying to query jobs based on three variable parameters:
Location
Category
Type
Below is my Query:
query getJobsSearch($location: String, $type: String, $category: String) {
  jobs(
    order_by: {created_at: desc},
    where: {
      deleted_at: {_is_null: true},
      created_at: {_gt: "2021-06-16T10:06:38.551984+00:00"},
      job_category: {slug: {_eq: $category}},
      location: {slug: {_eq: $location}},
      job_type: {slug: {_eq: $type}}
    },
    limit: 50
  ) {
    jobId
    title
    company_name
    job_category {
      name
      slug
    }
    job_type {
      name
      slug
    }
    isRemote
    location {
      city
      slug
    }
    created_at
  }
}
Passing the variables as follows:
{"location": "new-delhi", "type": "full-time", "category": "finance"}
To call all the jobs with no filters, I am passing empty strings in the variables as follows:
{"location": "", "type": "", "category": ""}
When I pass the above I get zero results back, even though my tables have data in them.
Also, when I need to query based only on location, I pass the following variables with type and category as empty strings:
{"location": "new-delhi", "type": "", "category": ""}
How can I reset the variables so that I get back all the listings without filtering on location, category, or type?
Is passing "" the right way to skip a filter in the where clause?
In Ruby we are so used to chaining queries that this feels weird and complex. Hope to get some help.
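One pattern that may help here, as a sketch: assuming this is a Hasura-generated schema (the _eq/_is_null operators suggest so), it should also expose a jobs_bool_exp input type, so the whole where expression can be passed as a single variable and built up client-side with only the filters you actually want:

query getJobsSearch($where: jobs_bool_exp!) {
  jobs(order_by: {created_at: desc}, where: $where, limit: 50) {
    jobId
    title
    # ...same selection set as above
  }
}

Variables for all jobs, no filters:
{"where": {"deleted_at": {"_is_null": true}}}

Variables for filtering on location only:
{"where": {"deleted_at": {"_is_null": true}, "location": {"slug": {"_eq": "new-delhi"}}}}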

Related

Streamsets Data Collector: Replace a Field With Its Child Value

I have a data structure like this
{
  "id": 926267,
  "updated_sequence": 2304899,
  "published_at": {
    "unix": 1589574240,
    "text": "2020-05-15 21:24:00 +0100",
    "iso_8601": "2020-05-15T20:24:00Z"
  },
  "updated_at": {
    "unix": 1589574438,
    "text": "2020-05-15 21:27:18 +0100",
    "iso_8601": "2020-05-15T20:27:18Z"
  }
}
I want to replace the updated_at field with its unix field value using StreamSets Data Collector. As far as I know, it can be done using Field Replacer, but I still haven't figured out how to write the mapping expression. How can I achieve that?
In Field Replacer, set Fields to /rec/updated_at and New value to ${record:value('/rec/updated_at/unix')} and it will replace the value.
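With the sample record above, the output after the Field Replacer stage would then look roughly like this (assuming those field paths line up with your record root):

{
  "id": 926267,
  "updated_sequence": 2304899,
  "published_at": {
    "unix": 1589574240,
    "text": "2020-05-15 21:24:00 +0100",
    "iso_8601": "2020-05-15T20:24:00Z"
  },
  "updated_at": 1589574438
}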
Cheers,
Dash

Is there a way to group queries in GraphQL?

I'm trying to group GraphQL queries to get a more organized response.
I want to make a query for allEmployees and get back something in the following format
GraphQL Query
{
  Employees: allEmployees {
    id
    firstName
    lastName
  }
}
Response
{
  "data": {
    "Employees": {
      "new": [
        {
          "id": "1",
          "firstName": "James",
          "lastName": "Test"
        },
        {
          "id": "3",
          "firstName": "Charles",
          "lastName": "Tes"
        }
      ],
      "updated": [
        {
          "id": "4",
          "lastName": "Test"
        }
      ],
      "deleted": [
        {
          "id": "1"
        }
      ]
    }
  }
}
I've looked into a few options to get named sub-requests (like new, updated and deleted) via aliases on fragments, but that doesn't seem to be a thing. I've looked at unions, but that doesn't seem to be what I'm looking for.
Ideally I would love to query GraphQL like...
{
  Employees: {
    new: allEmployees(status: "new") {
      id
      firstName
      lastName
    }
    updated: allEmployees(status: "updated") {
      id
      firstName
      lastName
    }
    deleted: allEmployees(status: "deleted") {
      id
    }
  }
}
but I don't think it is possible to pass a nested query like this.
Is there any way to do something like this? I'm using GraphQL with Ruby via the graphql-ruby gem.
Please let me know if you need more information.
Thanks
Edit
To clarify: we have multiple entities that will follow the new/updated/deleted pattern. I'm trying to get a response where the results are nested inside a parent name/alias (Employees, Users):
{
  "data": {
    "Employees": {
      "new": [...],
      "updated": [...],
      "deleted": [...]
    },
    "Users": {
      "new": [...],
      "updated": [...],
      "deleted": [...]
    },
    ...
  }
}
That is why we would want to nest.
GraphQL definitely supports nested queries and multiple top-level queries, and graphql-ruby supports these just fine.
If your GraphQL schema looks like:
type Employee {
  id: ID!
  firstName: String
  lastName: String
}

enum Status { NEW, UPDATED, DELETED }

type Query {
  allEmployees(status: Status): [Employee!]!
}
then you could write a query
fragment EmployeeData on Employee { id firstName lastName }

query Everyone {
  new: allEmployees(status: NEW) { ...EmployeeData }
  updated: allEmployees(status: UPDATED) { ...EmployeeData }
  deleted: allEmployees(status: DELETED) { ...EmployeeData }
}
That wouldn't have quite the specific form you're looking for – there aren't good ways to add or remove arbitrary levels in your query, like adding an "Employees" label or removing layers from React-style connection records – but it can retrieve the data you're looking for.
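If you control the schema and really do want that extra Employees level, one rough schema-level sketch (the type and field names here are invented, and the resolvers would have to run the three filtered lookups themselves) is a wrapper type:

type EmployeeChanges {
  new: [Employee!]!
  updated: [Employee!]!
  deleted: [Employee!]!
}

type Query {
  employees: EmployeeChanges!
}

which would let you query:

{
  employees {
    new { id firstName lastName }
    updated { id firstName lastName }
    deleted { id }
  }
}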

Order by nested object in RethinkDB using Go driver

How is it possible, using the Go driver, to fetch data from RethinkDB ordered by a field of a nested object?
So let's imagine I have JSON like this in my table:
[
  {
    "id": "1",
    "date": "2001-01-15",
    "time": {
      "begin": "09:00",
      "end": "10:30"
    }
  },
  {
    "id": "2",
    "date": "2001-01-16",
    "time": {
      "begin": "08:30",
      "end": "10:30"
    }
  }
]
Go model is:
type MyTime struct {
	Begin time.Time `json:"begin"`
	End   time.Time `json:"end"`
}

type Something struct {
	Id   string    `json:"id"`
	Date time.Time `json:"date"`
	Time MyTime    `json:"time"`
}
An example of how it is possible to order by id:
var result []Something
db.Table("someTable").OrderBy("id").Run(session).All(&result)
I tried to order by beginning of time like this (thinking that the approach is the same as in ArangoDB, but apparently it is not):
var result []Something
db.Table("someTable").OrderBy("time.begin").Run(session).All(&result)
I saw an example on the official site of how this works using the native JavaScript driver:
Example: Use nested field syntax to sort on fields from subdocuments. (You can also
create indexes on nested fields using this syntax with indexCreate.)
r.table('user').orderBy(r.row('group')('id')).run(conn, callback)
But it is not really clear how to transform it to Go.
Any idea how to make it work?
You can use a function, something like this:
db.Table("someTable").OrderBy(func(row Term) Term {
	return row.Field("time").Field("begin")
}).Run(session).All(&result)
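For a fuller picture, here is a minimal sketch of how that might look with the gorethink driver imported as r (which is where the Term type above comes from); the table name and the Something struct are taken from the question, and the error handling is just illustrative:

// assumes something like: import r "gopkg.in/gorethink/gorethink.v4" (plus "log" for the error handling)
// and an open session *r.Session
cursor, err := r.Table("someTable").OrderBy(func(row r.Term) r.Term {
	// sort by the nested time.begin field
	return row.Field("time").Field("begin")
}).Run(session)
if err != nil {
	log.Fatal(err)
}
defer cursor.Close()

var result []Something
if err := cursor.All(&result); err != nil {
	log.Fatal(err)
}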

Elasticsearch aggregation on object

How can I run an aggregation query on only one property of an object, but get all the properties back in the result? E.g. I want to get [{'doc_count': 1, 'key': {'id': 1, 'name': 'tag name'}}], but I got [{'doc_count': 1, 'key': '1'}] instead. An aggregation on the field 'tags' returns zero results.
Mapping:
{
  "test": {
    "properties": {
      "tags": {
        "type": "object",
        "properties": {
          "id": {"type": "string", "index": "not_analyzed"},
          "name": {"type": "string", "index": "not_analyzed", "enabled": false}
        }
      }
    }
  }
}
Aggregation query (it returns only IDs, as expected, but how can I get ID & name pairs in the results?):
'aggregations': {
  'tags': {
    'terms': {
      'field': 'tags.id',
      'order': {'_count': 'desc'},
    },
  }
}
EDIT:
Got ID & name pairs by aggregating on "script": "_source.tags", but I'm still looking for a faster solution.
You can use a script if you want, e.g.:
"terms":{"script":"doc['tags.id'].value + '|' + doc['tags.name'].value"}
For each created bucket you will get a key with the values of the fields that you have included in your script. To be honest though, the purpose of aggregations is not to return full docs, but to do calculations on groups of documents (buckets) and return the results, e.g. sums and distinct values. What you are actually doing with your query is creating buckets based on the field tags.id.
Keep in mind that the key in the result will include both values separated by a '|', so you might have to manipulate its value to extract all the information that you need.
It's also possible to nest aggregations: you could aggregate by id and then by name.
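A rough sketch of that nesting, using the field names from the question (the sub-aggregation names are made up, and this assumes tags.name is actually indexed, which the "enabled": false in the mapping above may prevent):

"aggregations": {
  "by_tag_id": {
    "terms": {"field": "tags.id"},
    "aggregations": {
      "by_tag_name": {
        "terms": {"field": "tags.name"}
      }
    }
  }
}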
As additional information, the answer above (cpard's one) works perfectly with the nested object type. The weird results you got may come from the fact that you are using object and not nested.
The difference between these types is that a nested object keeps the internal relation between the elements of an object. That is why "terms":{"script":"doc['tags.id'].value + '|' + doc['tags.name'].value"} makes sense. If you use the object type, Elasticsearch doesn't know which tags.name goes with which tags.id.
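As a sketch of what the nested variant might look like (this assumes you can reindex, and the aggregation names are made up): the mapping would declare the field as nested,

"tags": {
  "type": "nested",
  "properties": {
    "id": {"type": "string", "index": "not_analyzed"},
    "name": {"type": "string", "index": "not_analyzed"}
  }
}

and the aggregation would then go through a nested aggregation on that path:

"aggregations": {
  "tags": {
    "nested": {"path": "tags"},
    "aggregations": {
      "by_id": {
        "terms": {"field": "tags.id"},
        "aggregations": {
          "by_name": {"terms": {"field": "tags.name"}}
        }
      }
    }
  }
}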
For more detail:
https://www.elastic.co/blog/managing-relations-inside-elasticsearch

How to remove a key from a RethinkDB document?

I'm trying to remove a key from a RethinkDB document.
My approaches (which didn't work):
r.db('db').table('user').replace(function(row){delete row["key"]; return row})
Other approach:
r.db('db').table('user').update({key: null})
This one just sets row.key = null (which looks reasonable).
The examples were tested in the RethinkDB Data Explorer through the web UI.
Here's the relevant example from the documentation on RethinkDB's website: http://rethinkdb.com/docs/cookbook/python/#removing-a-field-from-a-document
To remove a field from all documents in a table, you need to use replace to update each document so that it no longer includes that field (using without):
r.db('db').table('user').replace(r.row.without('key'))
To remove the field from one specific document in the table:
r.db('db').table('user').get('id').replace(r.row.without('key'))
You can change the selection of documents to update by using any of the selectors in the API (http://rethinkdb.com/api/), e.g. db, table, get, get_all, between, filter.
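For example, to strip the field only from documents matching a filter (the filter predicate here is just an illustration):

r.db('db').table('user').filter({city: 'Dallas'}).replace(r.row.without('key'))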
You can use replace with without:
r.db('db').table('user').replace(r.row.without('key'))
You do not need to use replace to update the entire document.
Here is the relevant documentation: ReQL command: literal
Assume your user document looks like this:
{
  "id": 1,
  "name": "Alice",
  "data": {
    "age": 19,
    "city": "Dallas",
    "job": "Engineer"
  }
}
And you want to remove age from the data property. Normally, update will just merge your new data with the old data. r.literal can be used to treat the data object as a single unit.
r.table('users').get(1).update({ data: r.literal({ age: 19, job: 'Engineer' }) }).run(conn, callback)
// Result passed to callback
{
  "id": 1,
  "name": "Alice",
  "data": {
    "age": 19,
    "job": "Engineer"
  }
}
or
r.table('users').get(1).update({ data: { city: r.literal() } }).run(conn, callback)
// Result passed to callback
{
  "id": 1,
  "name": "Alice",
  "data": {
    "age": 19,
    "job": "Engineer"
  }
}
