Related to Strapi forum -> https://forum.strapi.io/t/resolved-cant-get-the-data-from-nested-components-within-dynamic-zones/15896/3
Is there any flexible way to request dynamic zones via GraphQL without explicitly defining each of them in a query?
For example, I DON’T want to do something like this:
query Homepage {
  components {
    ... on SliderComponent {
      image
      text
    }
    ... on ParagraphComponent {
      title
      description
    }
    # and so on...
  }
}
Instead, somehow, I’d like to be able to get all of the dynamic zones without querying them separately.
So ideally it would be something like this:
query Homepage {
  components
}
and that would return all of the possible dynamic zone components with their nested fields.
The best scenario would be to query all the nested data within the dynamic zones, similar to what the REST API allows with something like http://localhost:1337/api/pages?populate[dynamiczones]
NOTE: I know that the queries above are not correct, but this is just an idea of the query shape.
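For reference, a rough sketch of that REST-style population, assuming Strapi v4's populate parameter and the qs helper; the dynamic zone attribute name components here is hypothetical:

const qs = require('qs');

const query = qs.stringify(
  {
    populate: {
      components: {      // hypothetical dynamic zone attribute on the page
        populate: '*',   // also pull each component's own fields
      },
    },
  },
  { encodeValuesOnly: true }
);

// -> GET http://localhost:1337/api/pages?populate[components][populate]=*
console.log(`http://localhost:1337/api/pages?${query}`);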
Here's the gist of a GraphQL query to ask for the events of various sports.
query Sports {
  sports {
    name
    events {
      name
    }
  }
}
We might get back something like:
Rugby
Six Nations
Rugby Union
Football
World Cup
Europe League
...
In this situation, it's possible for any sport to have an empty array of events. Is there something I can place into a query to require that any array should have at least 1 element? Or do I need to implement filtering on the client if I want to prevent this being seen?
Beyond requesting specific fields, GraphQL does not have any baked-in means of filtering or reducing the results of a query. Any filtering, sorting, etc. has to be implemented when creating the schema for the endpoint.
You would have to consult the documentation for the endpoint you're using (or run an introspection query) to determine if there are any arguments that can be passed to the sports field to prevent sports without events from being returned by the server.
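For example, a rough sketch of such an introspection check (the endpoint URL is a placeholder; only standard introspection fields are used):

const introspectionQuery = `
  {
    __schema {
      queryType {
        fields {
          name
          args {
            name
            type { name kind }
          }
        }
      }
    }
  }
`;

// POST the introspection query and inspect the arguments accepted by the sports field.
fetch('https://example.com/graphql', {   // placeholder endpoint
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: introspectionQuery }),
})
  .then((res) => res.json())
  .then(({ data }) => {
    const sports = data.__schema.queryType.fields.find((f) => f.name === 'sports');
    console.log(sports ? sports.args : 'no sports field exposed');
  });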
In GraphCMS we can use the where keyword. I don't know if this feature is available in the standard API.
query Sports {
  sports(where: { events_some: {} }) {
    name
    events {
      name
    }
  }
}
A somewhat similar question has been asked here but there's no answer for that yet. That question relates to an older version of Kibana so I hope you can help me.
I'm trying to set up some predefined queries in the Kibana dashboard. I'm using Kibana 5.1. The purpose of those queries is to filter some logs based on multiple different parameters.
Let's see a query I'd like to execute:
{
  "index": "${index_name}",
  "query": {
    "query_string": {
      "query": "message:(+\"${LOG_LEVEL}\")",
      "analyze_wildcard": true
    }
  }
}
I know I can query directly in the dashboard with something like message:(+"ERROR") and manually change ERROR to WARN, for example, but I don't want that - imagine that this query might be more complex and contain multiple fields.
Note that the data stored in the message is not structured - think of the message as a whole log line. This means I don't have fields like LOG_LEVEL which I could filter directly.
Is there any way I can set the index_name and LOG_LEVEL dynamically from the Kibana Discover dashboard?
You should go to Discover, open one document, and click the filter button next to any of the fields. After this, a filter will appear under the search bar and you can edit it and put any custom query in it. If you want to add more filters with more custom queries, you can repeat the same action with a different document or field, or you can go to Settings (or Management), Saved Objects, open the Search you saved, and in its JSON representation copy and paste the elements inside the filter array field as many times as you want.
And remember that in order to apply one of the filters, you should probably disable the other enabled ones (otherwise the search will be filtered by all of the enabled filters in your dashboard).
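For reference, one element of that filter array looks roughly like this in a Kibana 5.x saved search (an approximation; the meta fields and the logstash-* index pattern are assumptions and vary by version, so copy the shape from your own saved object). Here the filter is stored disabled so it can be toggled on when needed:

{
  "meta": {
    "index": "logstash-*",
    "alias": "ERROR only",
    "negate": false,
    "disabled": true
  },
  "query": {
    "query_string": {
      "query": "message:(+\"ERROR\")",
      "analyze_wildcard": true
    }
  }
}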
I'm using Relay + GraphQL (graphql-relay-js) connections and trying to determine the best way to optimize queries to the data source.
Everything is working, though it is inefficient when connection results are sliced. In the query example below, the resolver on item will obtain 200+ records for sale 727506341339, when in reality we only need 1 to be returned.
I should note that in order to fulfill this request we actually make two db queries:
1. Obtain all item ids associated with a sale
2. Obtain item data for each item id.
In testing and reviewing of the graphql-relay-js src, it looks like the slice happens on the final connection resolver.
Is there a method provided, short of nesting connections or mutating the sliced results of connectionFromArray, that would allow us to slice the results provided to the connection (the item ids) and then, in the connection resolver, fetch the item details against the already-sliced id result set? This would optimize the second query so we would only need to fetch one item's details, not all of them...
Obviously we can implement something custom or nest connections, but this seems like something that would be available out of the box, so I feel like I am missing something here...
Example Query:
query ItemBySaleQuery {
  viewer {
    item(sale: 727506341339) {
      items(first: 1) {
        edges {
          node {
            dateDisplay
            title
          }
        }
      }
    }
  }
}
Unfortunately the solution is not documented in the graphql-relay-js lib...
Connections can use resolveNode functions to work directly on an edge node. Example: https://github.com/graphql/graphql-relay-js/blob/997e06993ed04bfc38ef4809a645d12c27c321b8/src/connection/tests/connection.js#L64
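A rough sketch of that approach (the Item fields and the fetchItemIdsForSale / fetchItemById loaders are assumptions, not part of the library): slice the cheap id list first with connectionFromArray, then let resolveNode fetch full item data only for the edges that survived the slice.

const { GraphQLObjectType, GraphQLString } = require('graphql');
const {
  connectionArgs,
  connectionDefinitions,
  connectionFromArray,
} = require('graphql-relay');

// Hypothetical node type matching the fields used in the example query.
const ItemType = new GraphQLObjectType({
  name: 'Item',
  fields: {
    dateDisplay: { type: GraphQLString },
    title: { type: GraphQLString },
  },
});

// fetchItemIdsForSale / fetchItemById are hypothetical data loaders.
const { connectionType: itemConnection } = connectionDefinitions({
  nodeType: ItemType,
  // Called once per edge that survives the slice; edge.node is still just an item id here.
  resolveNode: (edge) => fetchItemById(edge.node),
});

const itemsField = {
  type: itemConnection,
  args: connectionArgs,
  resolve: async (sale, args) => {
    const itemIds = await fetchItemIdsForSale(sale.id); // query 1: ids only
    // The slice happens here, so query 2 (fetchItemById) only runs for the requested edges.
    return connectionFromArray(itemIds, args);
  },
};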
I have a use case where I have a set of predefined fields and also need to support adding dynamic fields to Elasticsearch with some basic searching on them. I am able to achieve this using dynamic template mapping. However, the frequency of adding such dynamic fields is quite high.
Consider this ES document for the Event type:
{
  "name": "Youth Conference",
  "venue": "Ahmedabad",
  "date": "10/01/2015",
  "organizer": "Invincible",
  "extensions": {
    "about": {
      "vision": "Visualizes the image of an ideal Country.",
      "mission": "Encapsulates the gravity of the top reformative solutions for betterment of Country."
    }
    // Anything can go here...
  }
}
In the example above, each event document may have any unknown/new fields. Hence, for every such new dynamic field introduced, ES will update the mapping of the type. My concern is: what is the cost of adding a new field mapping to an existing type?
I am planning to separate out all dynamic mappings (inside extensions) from the Event type by introducing another type, say EventExtensions, and using a parent/child relationship to map it to the Event type. I believe this may limit the cost (if any) of adding dynamic fields frequently to the type. However, to my knowledge, using a parent/child relationship will need more memory.
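For context, the kind of dynamic template mapping referred to above looks roughly like this (a sketch; the events index, event type, and the legacy elasticsearch JS client call are assumptions): any string that appears under extensions gets mapped automatically.

const elasticsearch = require('elasticsearch');
const client = new elasticsearch.Client({ host: 'localhost:9200' });

client.indices.create({
  index: 'events',
  body: {
    mappings: {
      event: {
        dynamic_templates: [
          {
            extension_strings: {
              path_match: 'extensions.*',
              match_mapping_type: 'string',
              mapping: { type: 'string' }, // every new extension field becomes an analyzed string
            },
          },
        ],
      },
    },
  },
});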
The first thing to remember here is that fields are per index, not per type.
So wherever you add new fields, they will be added to the same index, be it in another type or in a parent or child.
So decoupling the new fields into another type in the same index is not going to make any difference.
Second, field addition is not a very expensive operation. I know people who use thousands of fields and are fine with it. That being said, there should be a cap on the number of fields so that it doesn't grow to crazy numbers.
Here we have multiple approaches to solve the problem:
1) Let's assume that the new field data need not be directly searchable. In this case, you can serialize the entire JSON into a string and store it in a single field. Also make sure this field is not indexed. This way you can search based on the other fields, and then, on retrieval of the document, get back the information that was serialized.
2) Let's say the new fields look like this:
{
  "newInfo1": "log Of Info",
  "newInfo2": "A lot more info"
}
Instead of this, you can use:
{
  "newInfo": [
    {
      "fieldName": "newInfo1",
      "fieldValue": "log Of Info"
    },
    {
      "fieldName": "newInfo2",
      "fieldValue": "A lot more info"
    }
  ]
}
This way, the number of fields won't increase. But then, to make a field-level search possible, like "give me all documents with fieldName newInfo2 and containing the word more", you will need to make the newInfo field nested.
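A rough sketch of the nested query that implies, assuming the newInfo field is mapped as nested and reusing the example field names (the events index and the legacy elasticsearch JS client are assumptions):

const elasticsearch = require('elasticsearch');
const client = new elasticsearch.Client({ host: 'localhost:9200' });

// Match array entries whose fieldName is newInfo2 and whose value contains "more".
client.search({
  index: 'events',
  body: {
    query: {
      nested: {
        path: 'newInfo',
        query: {
          bool: {
            must: [
              { match: { 'newInfo.fieldName': 'newInfo2' } },
              { match: { 'newInfo.fieldValue': 'more' } },
            ],
          },
        },
      },
    },
  },
});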
Hope this helps.
I'm looking to search for a particular JSON document in a bucket and I don't know its document ID; all I know is the value of one of the sub-keys. I've looked through the API documentation but I'm still confused when it comes to my particular use case:
In mongo I can do a dynamic query like:
db.collection.find({ "name": "some-arbitrary-name-here" })
With Couchbase I'm under the impression that you need to create an index (for example on the name property) and use startKey/endKey, but this feels wrong - couldn't you still end up with multiple documents being returned? It would be nice to be able to pass a parameter to the view so that an exact match could be performed. Also, how would we handle multi-dimensional searches, i.e. name and category?
I'd like to do as much of the filtering as possible on the couchbase instance and ideally narrow it down to one record rather than having to filter when it comes back to the App Tier. Something like passing a dynamic value to the mapping function and only emitting documents that match.
I know you can use LINQ with Couchbase to filter, but if I've read the docs correctly that filtering is still done client-side. Still, if we could at least narrow the returned dataset down to a sensible subset, client-side filtering wouldn't be such a big deal.
Cheers
So you are correct on one point: you need to create a view (an index, indeed) to be able to query on the content of the JSON document.
So in your case you have to create a view with this kind of code:
function (doc, meta) {
  if (doc.type == "yourtype") { // just a good practice to type the doc
    emit(doc.name);
  }
}
This will create an index, distributed across all the nodes of your cluster, that you can now use in your application. You can point to a specific value using the "key" parameter.
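For example, with the Node SDK (2.x) the view above can be queried for an exact name like this (the bucket, design document, and view names are assumptions):

const couchbase = require('couchbase');

const cluster = new couchbase.Cluster('couchbase://localhost');
const bucket = cluster.openBucket('default');        // hypothetical bucket name

const query = couchbase.ViewQuery
  .from('docs', 'by_name')                           // hypothetical design doc / view
  .key('some-arbitrary-name-here');                  // exact match on the emitted key

bucket.query(query, (err, rows) => {
  if (err) throw err;
  // Each row carries the document id, so you can fetch the full document from there.
  console.log(rows);
});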