Efficiently check that a JSON response contains a specific element within an array - ruby

Given the JSON response:
{
  "tags": [
    {
      "id": 81499,
      "name": "sign-in"
    },
    {
      "id": 81500,
      "name": "user"
    },
    {
      "id": 81501,
      "name": "authentication"
    }
  ]
}
Using RSpec 2, I want to verify that this response contains the tag with the name authentication. Being fairly new to Ruby, I figured there must be a better way than iterating over the array and checking each value of name with include? or map/collect. I could simply use a regex to check for /authentication/i, but that doesn't seem like the best approach either.
This is my spec so far:
it "allows filtering" do
response = #client.story(15404)
#response.tags.
end

So, if
t = JSON.parse '{ ... }'
then this expression:
t['tags'].detect { |e| e['name'] == 'authentication' }
will either return nil, which evaluates as false, or the element it detected, which evaluates as true.
This will raise NoMethodError if there is no tags key. I think that's handled just fine in a test, but you can arrange for that case to also show up as false (i.e., nil) with:
t['tags'].to_a.detect { |e| e['name'] == 'authentication' }
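In the spec itself, a minimal sketch might look like this (assuming @client.story(15404) returns the parsed JSON as a Hash; the matcher uses RSpec 2's should syntax):
it "allows filtering" do
  response = @client.story(15404)
  # detect returns the matching tag Hash, or nil if no tag matches
  tag = response['tags'].to_a.detect { |e| e['name'] == 'authentication' }
  tag.should_not be_nil
end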


Match keys with sibling object in JSONata

I have a JSON object with the structure below. When looping over key_two I want to create a new object that I will return. The returned object should contain a title with the value from key_one's name where the id of key_one matches the currently looped-over node from key_two.
Both objects contain other keys that will also be included, but the first step I can't figure out is how to grab data from a sibling object while looping and match it against the current value.
{
  "key_one": [
    {
      "name": "some_cool_title",
      "id": "value_one",
      ...
    }
  ],
  "key_two": [
    {
      "node": "value_one",
      ...
    }
  ]
}
This is a good example of a 'join' operation (in SQL terms). JSONata supports this in a path expression via context variable binding. See https://docs.jsonata.org/path-operators#-context-variable-binding
So in your example, you could write:
key_one@$k1.key_two[node = $k1.id].{
  "title": $k1.name
}
You can then add extra fields to the resulting object by referencing items from either of the original objects. E.g.:
key_one@$k1.key_two[node = $k1.id].{
  "title": $k1.name,
  "other_one": $k1.other_data,
  "other_two": other_data
}
See https://try.jsonata.org/--2aRZvSL
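With the sample object above, where key_two[0].node matches key_one[0].id, this should evaluate to something like:
{
  "title": "some_cool_title"
}
(A single match yields a single object; multiple matches would yield an array of them.)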
I seem to have found a solution for this.
[key_two].$filter($$.key_one, function($v, $k){
  $v.id = node
}).{"title": name ? name : id}
Gives:
[
  {
    "title": "value_one"
  },
  {
    "title": "value_two"
  },
  {
    "title": "value_three"
  }
]
Leaving this here in case someone has a similar issue in the future.

Absinthe returns an array that contains one null value instead of an empty array

I am confused by this behavior that I'm seeing with Absinthe.
For a top-level field, e.g.
field :projects, list_of(:project) do
  arg :user_id, :string
  resolve(&ProjectResolver.list_projects/2)
end
If ProjectResolver.list_projects/2 returns {:ok, []}, then the JSON result will correctly be
{
  "data": {
    "projects": []
  }
}
However, for a subfield, e.g. the tags field in
object :task do
  field :id, :string
  # ... Other fields
  field :tags, list_of(:tag) do
    resolve(&TaskResolver.list_tags/3)
  end
  # ... Other subfields
end
If TaskResolver.list_tags/3 returns {:ok, []}, I get
{
  "data": {
    "task": {
      "id": "ba156cde-8c5f-4806-b161-62071b0098b3",
      "tags": [
        null
      ]
    }
  }
}
instead of
{
  "data": {
    "task": {
      "id": "ba156cde-8c5f-4806-b161-62071b0098b3",
      "tags": []
    }
  }
}
which I think would be the reasonable response.
Now the non-empty array containing a single null item is causing headaches for me on the frontend (Apollo), and I'm not sure there's any way to easily work around it. It would be ideal if the data returned were an empty array in the first place, and I don't see why it's not.
Immediately after posting this question I realized that it might well be that my resolver was not returning {:ok, []} after all... Indeed, it was returning {:ok, [nil]} due to the Ecto query being wrong (:left_join instead of :join). That's why the returned JSON contains [null]. I just needed to fix my resolver function to actually return {:ok, []} in this case. I guess writing about an issue does help clear your thoughts on it.

How to create a HashMap with a custom object as a key?

In Elasticsearch, I have an object that contains an array of objects. Each object in the array has type, id, updateTime, and value fields.
My input parameter is an array containing objects of the same type but with different values and update times. I'd like to update the objects with the new value when they exist and create new ones when they don't.
I'd like to use a Painless script to update them while keeping them distinct, as some of them may overlap. The issue is that I need to use both type and id to keep them unique. So far I've done it with a brute-force approach, a nested for loop comparing the elements of both arrays, but I'm not too happy with that.
One idea is to take the array from the source, build a temporary HashMap for fast lookup, process the input, and later store all objects back into the source.
Can I create a HashMap with a custom object (a class with type and id) as a key? If so, how do I do it? I can't add a class definition to the script.
Here's the mapping. All fields are disabled, as I use them only as intermediate state and query using other fields.
{
  "properties": {
    "arrayOfObjects": {
      "properties": {
        "typ": {
          "enabled": false
        },
        "id": {
          "enabled": false
        },
        "value": {
          "enabled": false
        },
        "updated": {
          "enabled": false
        }
      }
    }
  }
}
Example doc.
{
  "arrayOfObjects": [
    {
      "typ": "a",
      "id": "1",
      "updated": "2020-01-02T10:10:10Z",
      "value": "yes"
    },
    {
      "typ": "a",
      "id": "2",
      "updated": "2020-01-02T11:11:11Z",
      "value": "no"
    },
    {
      "typ": "b",
      "id": "1",
      "updated": "2020-01-02T11:11:11Z"
    }
  ]
}
And finally, part of the script in its current form. The script does some other things too, so I've stripped them out for brevity.
// Make sure the target array exists
if (ctx._source.arrayOfObjects == null) {
  ctx._source.arrayOfObjects = new ArrayList();
}
for (obj in params.inputObjects) {
  def found = false;
  // Linear scan for an existing object with the same typ and id
  for (existingObj in ctx._source.arrayOfObjects) {
    if (obj.typ == existingObj.typ && obj.id == existingObj.id && isAfter(obj.updated, existingObj.updated)) {
      existingObj.updated = obj.updated;
      existingObj.value = obj.value;
      found = true;
      break;
    }
  }
  if (!found) {
    ctx._source.arrayOfObjects.add([
      "typ": obj.typ,
      "id": obj.id,
      "value": params.inputValue,
      "updated": obj.updated
    ]);
  }
}
There's technically nothing suboptimal about your approach.
A HashMap could potentially save some time, but since you're scripting, you're already bound to its innate inefficiencies. By the way, here's how you can initialize and work with HashMaps.
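A minimal sketch of that idea, building on your script: it uses a composite "typ|id" string key (since a script can't define a class to use as a map key) and reuses your isAfter helper. New entries take their value from the input object; swap in params.inputValue if that's what your full script intends.
// Index the existing objects by a composite "typ|id" string key
Map lookup = new HashMap();
for (existingObj in ctx._source.arrayOfObjects) {
  lookup.put(existingObj.typ + '|' + existingObj.id, existingObj);
}
for (obj in params.inputObjects) {
  def key = obj.typ + '|' + obj.id;
  def existing = lookup.get(key);
  if (existing == null) {
    // No match: append a new entry and register it in the lookup
    Map entry = ["typ": obj.typ, "id": obj.id, "value": obj.value, "updated": obj.updated];
    ctx._source.arrayOfObjects.add(entry);
    lookup.put(key, entry);
  } else if (isAfter(obj.updated, existing.updated)) {
    // Match: update in place only when the input is newer
    existing.updated = obj.updated;
    existing.value = obj.value;
  }
}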
Another approach would be to rethink your data structure: instead of arrays of objects, use keyed objects or similar, since arrays of objects aren't great for frequent updates.
Finally, a tip: you said these fields are only used to store some intermediate state. If that weren't the case (or won't be in the future), I'd recommend using the nested field type so the objects in the array can be queried independently of one another.
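For illustration, a sketch of what that mapping could look like with the nested type (the concrete field types here are assumptions based on the example doc):
{
  "properties": {
    "arrayOfObjects": {
      "type": "nested",
      "properties": {
        "typ": { "type": "keyword" },
        "id": { "type": "keyword" },
        "value": { "type": "keyword" },
        "updated": { "type": "date" }
      }
    }
  }
}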

How can I remove dots from a nested object in Logstash

We have a complex object with nested fields whose names can be dynamic and contain dots. When I try to ingest the data into Elasticsearch, it gives me the following error:
Object mapping for [x] tried to parse field [x.y] as object, but found a concrete value
One record can have key/values like a.b.c: 4 and another record can have a.b: 3. We don't have control over the source of the incoming data, so the only option is to change the object in Logstash. Here is an example of an incoming object:
{
  "result": "https://www.yahoo.com",
  "tags": {
    "url": "https://www.yahoo.com",
    "projectName": "monitor",
    "host": "ttt",
    "dd": 12345,
    "vv": "kk"
  },
  "timestamp": 1586599441000,
  "runId": 12345,
  "performance": {
    "x.y.z": 31307
  },
  "channel": "clientperf",
  "asset": {
    "a.b.c": 5,
    "a.b": 4
  }
}
As you can see, the keys inside asset and performance contain dots. The fields at the root (like runId, performance, etc.) are fine. How can I resolve this, either by replacing the dots in Logstash or in any other way that avoids the error? I'm aware of the de_dot plugin, but to use it we need to name the nested fields explicitly, and we can't enforce naming on incoming records. I also know that we could probably achieve this with the ruby plugin, but I have zero knowledge of Ruby. Any help is appreciated.
You could use Hash#deep_transform_keys from ActiveSupport, or define an equivalent yourself:
# Recursively transform the keys of a Hash (note: this version does not
# descend into Hashes nested inside Arrays)
class Hash
  def deep_transform_keys(&block)
    result = {}
    each do |key, value|
      result[yield(key)] = value.is_a?(Hash) ? value.deep_transform_keys(&block) : value
    end
    result
  end
end

# Strip the dots from every key; `hash` is the parsed event as a Hash
puts hash.deep_transform_keys { |key| key.to_s.gsub(".", "") }
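To apply this inside Logstash itself, a minimal sketch using the ruby filter plugin and its event.get/event.set API might look like the following (the field names performance and asset come from the example above; adjust to taste):
filter {
  ruby {
    code => '
      # Return a copy of the value with dots stripped from all nested keys
      def strip_dots(value)
        return value unless value.is_a?(Hash)
        value.each_with_object({}) do |(k, v), result|
          result[k.to_s.gsub(".", "")] = strip_dots(v)
        end
      end
      ["performance", "asset"].each do |field|
        current = event.get(field)
        event.set(field, strip_dots(current)) if current
      end
    '
  }
}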

How to read a value from a JSON object?

I'm trying to read an individual value from a JSON array object to display on the page. I have tried the code below but couldn't make it work. Please advise what I am doing wrong here.
Appreciate your help.
You can get the length of a JavaScript array via its length property. To access the Reference array in your object, you can use dot notation.
In combination, the following should do what you expect:
var obj = {
  "Reference": [
    {
      "name": "xxxxxxxx",
      "typeReference": {
        "articulation": 0,
        "locked": false,
        "createdBy": {
          "userName": "System",
        },
        "lastModifiedBy": {
          "userName": "System",
        },
        "lastModified": 1391084398660,
        "createdOn": 1391084398647,
        "isSystem": true
      },
      ...
    },
    ...
  ]
};
console.log(obj.Reference.length);
In case you are actually dealing with a JSON string, not a JavaScript object, you will need to parse it first via JSON.parse().
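For example (jsonString here is just an illustrative variable name holding the JSON text):
var obj = JSON.parse(jsonString); // throws a SyntaxError if the string is not valid JSON
console.log(obj.Reference.length);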
You get the length of an array by simply accessing its length attribute.
For example, [0,1,2,3].length === 4.
If you just want to loop through the array, use forEach or map instead of a for loop. It's safer, cleaner, less hassle, and you don't need to know the length.
E.g.
[0,1,2,3].forEach(num => console.log(num))
