Understanding GraphQL Operations, Fields and Directives

I'm a little confused about the difference between a GraphQL operation, fields, and applying directives to them.
E.g.:
extend type Query {
  """The Hello <Name>! query."""
  getABCServiceHelloNameInput(input: ABCServiceHelloNameInput): String!
  """The Hello World! query."""
  getABCServiceHelloWorld: String! @authorized(key: "value")
}
directive @authorized(key: String!) on FIELD_DEFINITION
In this case, are getABCServiceHelloNameInput and getABCServiceHelloWorld fields inside the Query type?
Or are they operations? If not, is the query itself an operation?
Let's say I want to create a directive that allows only authorized people to make a particular query. I see that directives can only be applied to fields. Is it okay to apply the authorized directive the way I have?

Related

Is it possible to query by field data type in Elasticsearch?

I need to run a query in Elasticsearch by field data type, and I have not been successful in creating it. I want to be able to (1) specify the type I want to search for in the query, i.e. all fields of {"type": "boolean"}, and also (2) get a field and see what its type is.
The reason is to check that each field is mapped correctly. Let's say I inserted the following data into this index and these fields, and I now want to see what the data types of those fields are programmatically. How would I query that?
POST /index_name1/_doc/
{
  "field1": "hello_field_2",
  "field2": "123456.54321",
  "field3.field4": false,
  "field3.field5.field10": "POINT(-117.918976 33.812511)",
  "field3.field5.field8": "field_of_dragons",
  "field3.field5.field9": "2022-05-26T07:47:26.133275Z"
}
I have tried:
GET /index_name1/_search
{
  "query": {
    "wildcard": {
      "field3.field4": { "type": "*" }
    }
  }
}
That gives the error [wildcard] query does not support [type].
I've tried many other queries and searched the documentation and threads, but can't find anything that will do this. It has got to be possible, right?
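A note on part (2): the data type of every field is recorded in the index mapping, so it can be read with the get-mapping API rather than a search query. A minimal check against the index above would be:
GET /index_name1/_mapping
The response lists each field under mappings.properties together with its type (boolean, date, text, and so on). For part (1), matching documents by the type of their fields, the mapping would still have to be fetched first and the search built from it, since, as far as I know, the search API works on field values, not field types.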

How Elasticsearch updateByQuery syntax works

I've been working with Elasticsearch for a few days. While building a CRUD app, I've come across the updateByQuery method. I'm working with NestJS, and the way I'm updating a field is:
await this.elasticSearch.updateByQuery({
  index: 'my_index_user',
  body: {
    query: {
      match: {
        name: 'user_name',
      },
    },
    script: {
      inline: 'ctx._source.name = "new_user_name"',
    },
  },
});
My question is:
Why does Elasticsearch need the syntax 'ctx._source.name = "new_user_name"' to specify what the new value of the name field should be? What is ctx._source in this context?
As mentioned in the official documentation on source filtering, _source holds the value exactly as it was sent to Elasticsearch; it is stored as-is and does not go through the analysis process.
Take a text field with the (default) standard analyzer and store the value foo bar in it. Elasticsearch breaks the value up as it goes through the analysis process, so the two tokens foo and bar are stored in the inverted index; but if you want to see the original value, i.e. foo bar, you can read it back from _source.
Hence it's always useful to keep the original value (untouched by the analysis process) in _source, and with this API you are updating the field value there: in the update script, ctx._source refers to that stored document source, so assigning to ctx._source.name changes the name field of every document that matches the query. This also helps when you later want to reindex into a new index or change how the field is analyzed, since the original value is still available in _source.
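As a rough sketch, here is the same update as a raw request, passing the new value through script params instead of hard-coding it into the script string (index and field names taken from the question):
POST /my_index_user/_update_by_query
{
  "query": {
    "match": { "name": "user_name" }
  },
  "script": {
    "lang": "painless",
    "source": "ctx._source.name = params.newName",
    "params": { "newName": "new_user_name" }
  }
}
Keeping the script source constant and varying only params lets Elasticsearch compile the script once and reuse it across calls.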

Type of field for prefix search in Elasticsearch

I'm confused about which field type I should use for prefix search. Many examples show search_as_you_type, but I think autocomplete is not what I'm going for.
I have a UUID field:
id: 34y72ca1-3739-41ff-bbec-f6d17479384c
The following terms should return the doc above:
3
34
34y72ca1
34y72ca1-3739
34y72ca1-3739-41ff-bbec-f6d17479384c
Searching for 3739 should not return it, as the ID doesn't start with 3739. Partial matching is what I was originally going for, but the wildcard field type is not supported by Amazon AWS, so I'm compromising with prefix search instead of partial search.
I tried the search_as_you_type field, but it doesn't return the result when I use the whole ID. My use case is that results are shown when the user presses Enter, rather than live as they type, so it's OK if speed is compromised; I'm just hoping for something that performs well over many rows of data.
Thanks
If you have not explicitly defined any index mapping, then you need to use the id.keyword field instead of the id field for the prefix query to show the appropriate results. This field uses the keyword analyzer instead of the standard analyzer:
{
  "query": {
    "prefix": {
      "id.keyword": {
        "value": "34y72ca1"
      }
    }
  }
}
Otherwise, you can modify your index mapping by adding a multi-field to the id field, along the lines of the sketch below.
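A minimal mapping sketch (the index name my-index is just a placeholder) that keeps id as analyzed text and adds an exact keyword sub-field for the prefix query to target:
PUT /my-index
{
  "mappings": {
    "properties": {
      "id": {
        "type": "text",
        "fields": {
          "keyword": { "type": "keyword" }
        }
      }
    }
  }
}
This is essentially what Elasticsearch's dynamic mapping creates for string fields by default, which is why id.keyword already exists when no explicit mapping has been defined.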

How to query objects that contain a specific value in an array in GraphQL

I have an object to query:
query demo {
  demo {
    name # string
    words # array of strings
  }
}
I am sending this query to an API; the words property is an array of strings.
I would like to query all objects that contain hello in the words (array of strings) property.
Is this possible on the client side using, for example, @include?
And how?
The @skip and @include directives are used to explicitly skip or include a particular field based on the provided if argument. They are normally used in conjunction with variables, so that the same document can be used to effectively generate queries with different selection sets (sets of fields) based on the provided variable.
For example, you can write a single query like this
query demo($includeName: Boolean!) {
  demo {
    name @include(if: $includeName)
    words
  }
}
and just pass in a different value for includeName instead of writing two separate queries:
query demoWithName {
  demo {
    name
    words
  }
}
query demoWithoutName {
  demo {
    words
  }
}
GraphQL provides built-in mechanisms for specifying which fields to return, such as fragments and the @skip and @include directives. It does not, however, have any built-in mechanism for filtering (or, for that matter, sorting, pagination, etc.). The server has to implement filtering itself by adding the appropriate arguments to fields and changing the field resolution logic to account for the values of those arguments, as in the sketch below.
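Here the containsWord argument, the Demo type shape, and the in-memory DEMOS list are all made up for illustration. The schema gains an argument:
type Demo {
  name: String
  words: [String!]
}
type Query {
  demo(containsWord: String): [Demo!]!
}
and the resolver applies it, roughly:
const resolvers = {
  Query: {
    // Return only the objects whose words array contains the requested word;
    // with no argument, return everything.
    demo: (_parent, { containsWord }) =>
      containsWord ? DEMOS.filter((d) => d.words.includes(containsWord)) : DEMOS,
  },
};
The client would then ask for demo(containsWord: "hello") { name words }, and the filtering happens on the server rather than through a directive.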

How to make Searchkick/Elasticsearch where clause case insensitive?

I'm using Elasticsearch with Ruby (Searchkick). At the moment, the where filter is case-sensitive by default.
I'm using Elasticsearch with my EquityContract model, so when I search for Food I get different results than when I search for food.
[21] pry(main)> EquityContract.search('*', {:load=>false, :where=>{:industry=>"FOOD"}, limit:1})
effective_url=http://127.0.0.1:9200/equity_contracts_development/_search response_code=200 return_code=ok total_time=0.002279
EquityContract Search (5.9ms) curl http://127.0.0.1:9200/equity_contracts_development/_search?pretty -d '{"query":{"filtered":{"query":{"match_all":{}},"filter":{"and":[{"term":{"industry":"FOOD"}}]}}},"size":1,"from":0}'
=> #<Searchkick::Results:0x0000010b65c5c8
@klass=
EquityContract(id: integer, ticker: text, name: string, country: string, currency: string, instrument: string),
@options=
{:page=>1,
:per_page=>1,
:padding=>0,
:load=>false,
:includes=>nil,
:json=>false,
:match_suffix=>"analyzed",
:highlighted_fields=>[]},
@response=
{"took"=>1,
"timed_out"=>false,
"_shards"=>{"total"=>5, "successful"=>5, "failed"=>0},
"hits"=>{"total"=>0, "max_score"=>nil, "hits"=>[]}}>
While I get some results when I do the same with Food:
[23] pry(main)> EquityContract.search('*', {:load=>false, :where=>{:industry=>"Food"}, limit:1})
ETHON: performed EASY effective_url=http://127.0.0.1:9200/equity_contracts_development/_search response_code=200 return_code=ok total_time=0.002795
EquityContract Search (7.5ms) curl http://127.0.0.1:9200/equity_contracts_development/_search?pretty -d '{"query":{"filtered":{"query":{"match_all":{}},"filter":{"and":[{"term":{"industry":"Food"}}]}}},"size":1,"from":0}'
=> #<Searchkick::Results:0x000001112d1880
@klass=
EquityContract(id: integer, ticker: text, name: string, country: string, currency: string, instrument: string),
@options=
{:page=>1,
:per_page=>1,
:padding=>0,
:load=>false,
:includes=>nil,
:json=>false,
:match_suffix=>"analyzed",
:highlighted_fields=>[]},
@response=
{"took"=>1,
"timed_out"=>false,
"_shards"=>{"total"=>5, "successful"=>5, "failed"=>0},
"hits"=>
{"total"=>73,
"max_score"=>1.0,
"hits"=>
[{"_index"=>"equity_contracts_development_20160320195353552",
"_type"=>"equity_contract",
"_id"=>"1181",
"_score"=>1.0,
"_source"=>
{"name"=>"Some name",
"ticker"=>"some ticker",
"country"=>"SA",
How can I change this to make it case-insensitive so I'll get the same results for both?
I see Searchkick generates a term query, but what you need is a full-text query. I'm not familiar with Searchkick, so I can't tell you how to change it there.
According to the documentation on the official Elasticsearch website:
Term query
Queries like the term or fuzzy queries are low-level queries that have no analysis phase. They operate on a single term. A term query for the term Foo looks for that exact term in the inverted index and calculates the TF/IDF relevance _score for each document that contains the term.
Full-text query
Queries like the match or query_string queries are high-level queries that understand the mapping of a field ... If you query a full-text (analyzed) field, they will first pass the query string through the appropriate analyzer to produce the list of terms to be queried.
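So one option, if you can drop down to a raw Elasticsearch query, is to filter on the analyzed copy of the field with a match query instead of a term query: with the standard analyzer, both the indexed value and the query string are lowercased, so FOOD, Food and food all resolve to the same term. The options above show :match_suffix=>"analyzed", which suggests the analyzed copy is industry.analyzed; assuming that mapping, a minimal sketch would be:
GET /equity_contracts_development/_search
{
  "size": 1,
  "query": {
    "match": {
      "industry.analyzed": "FOOD"
    }
  }
}
Another common approach is to normalize the data yourself: lowercase the industry value at index time and lowercase the where value in the Searchkick call, so the exact term filter still works.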
