How to make an array of objects for Grape - Ruby

I am building APIs using Grape for a Rails application.
What I am trying right now is this form, and this is the output:
{
"page_score_master": {
"issue_date": "2014-06-23"
},
"press_id": "1",
"print_date": "2014-06-23",
"product_id": 1,
"pull_id": 2,
"press_run_id": 1,
"total_section": 1,
"ssa": [
{
"ss": {
"section_name": "A"
},
"ss1": {
"section_name": "B"
}
}
],
"foreman_id": 1,
"pic_id": 1,
"score_sheet_master_id": 1,
"score_sheet_sections_attributes": {
"score_sheet_id": "1"
},
"route_info": {
"options": {
"description": "create score sheet",
"params": {
"page_score_master": {
"required": true,
"type": "Hash"
},
"page_score_master[issue_date]": {
"required": true,
"type": "String"
},
"print_date": {
"required": true,
"type": "String"
},
"total_section": {
"required": true,
"type": "Integer"
},
"ssa": {
"required": false,
"type": "Array"
},
"ssa[section_name]": {
"required": false,
"type": "String"
},
"ssa[total_pages]": {
"required": false,
"type": "Integer"
},
"ssa[color_pages]": {
"required": false,
"type": "String"
},
"ssa[score_sheet_id]": {
"required": false,
"type": "Integer"
}
}
}
I have omitted part of the JSON to keep it shorter.
What I need is an array of ssa objects, but so far I have been unable to get one: the ssa array ends up with only one object.
In my API controller I have the following code:
optional :ssa, type: Array do
  requires :ss, type: Hash do
    optional :section_name, type: String
    optional :total_pages, type: Integer
    optional :color_pages, type: String
    optional :score_sheet_id, type: Integer
  end
end

I think you have two problems here.
The first one lies in your form declaration.
In the code you say you have an Array (called ssa) of Hashes (called ss).
In your form, you're sending a hash called ss1 as part of your ssa Array. The ss1 hash will be ignored, so you'll only have one ss element in your array.
If you rename ss1 to ss in your form:
ssa[][ss][section_name] A
ssa[][ss][section_name] B
you'll get to the second problem, which lies in the API controller definition:
Your controller expects a ssa array whose elements can each contain only one ss hash, so it will overwrite the first [ss][section_name].
What you want to do is declare ssa as an Array and remove the ss group:
requires :ssa, type: Array do
  optional :section_name, type: String
  optional :total_pages, type: Integer
  optional :color_pages, type: String
  optional :score_sheet_id, type: Integer
end
This will require an array (ssa) of hashes. You don't need to declare the ss group; the block already expects an array of hashes with section_name, total_pages, etc. as keys. If ssa is not a required param, simply declare it optional, as you did in your controller.
Then, your form should look like this:
ssa[][section_name] ABC
ssa[][total_pages] 3
ssa[][section_name] DEF
ssa[][total_pages] 6
This will result in:
:ssa=>
  [{:section_name=>"ABC",
    :total_pages=>3,
    :color_pages=>nil,
    :score_sheet_id=>nil},
   {:section_name=>"DEF",
    :total_pages=>6,
    :color_pages=>nil,
    :score_sheet_id=>nil}]
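For completeness, here is a minimal sketch of how the corrected declaration might sit inside a Grape endpoint. The class name and route name are assumptions for illustration, and only the parameters discussed above are shown:

class ScoreSheetAPI < Grape::API  # hypothetical class name
  format :json

  desc 'create score sheet'
  params do
    requires :page_score_master, type: Hash do
      requires :issue_date, type: String
    end
    requires :print_date, type: String
    requires :total_section, type: Integer
    # An array of section hashes; each element may carry these keys.
    optional :ssa, type: Array do
      optional :section_name, type: String
      optional :total_pages, type: Integer
      optional :color_pages, type: String
      optional :score_sheet_id, type: Integer
    end
  end
  post :create_score_sheet do  # hypothetical route name
    # params[:ssa] arrives as an array of hashes, e.g.
    # [{ section_name: 'ABC', total_pages: 3 }, { section_name: 'DEF', total_pages: 6 }]
    declared(params)
  end
end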

Related

How can I form the property in Compose to return int(0) if a condition is true and not return anything if the condition is false?

How can I form this expression to return an int value of 0 if true and not return the property at all if false? WareHouseEvent is an array and the property is inside a Compose action.
Expression:
if(contains(variables('WareHouseEvent'), 'OB_2910'), int(0), <not return anything>)
An alternative to the first answer is to always add the property and then remove it after the fact.
Here is an example you can copy into your own tenant for testing:
{
"definition": {
"$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
"actions": {
"Compose_JSON_Object": {
"inputs": {
"AnotherProperty": "Another Value",
"TestProperty": "#variables('Data')"
},
"runAfter": {
"Initialize_Integer": [
"Succeeded"
]
},
"type": "Compose"
},
"Initialize_Integer": {
"inputs": {
"variables": [
{
"name": "Data",
"type": "integer",
"value": 0
}
]
},
"runAfter": {},
"type": "InitializeVariable"
},
"New_JSON_Object": {
"inputs": {
"variables": [
{
"name": "New Object",
"type": "object",
"value": "#if(equals(variables('Data'), 1), removeProperty(outputs('Compose_JSON_Object'), 'TestProperty'), outputs('Compose_JSON_Object'))"
}
]
},
"runAfter": {
"Compose_JSON_Object": [
"Succeeded"
]
},
"type": "InitializeVariable"
}
},
"contentVersion": "1.0.0.0",
"outputs": {},
"parameters": {},
"triggers": {
"manual": {
"conditions": [],
"inputs": {},
"kind": "Http",
"type": "Request"
}
}
},
"parameters": {}
}
I have an integer variable at the top which stores a value of either 1 or 0.
Then in my Compose, I add that value as a property of the composed object.
Then beneath that, I set a new variable with the (potentially) updated object using an expression to determine if the added property should be removed or not.
You'd just need to adjust the condition portion of the IF statement with your expression.
if(equals(variables('Data'), 1), removeProperty(outputs('Compose_JSON_Object'), 'TestProperty'), outputs('Compose_JSON_Object'))
The property will be removed or retained depending on the value of the Data variable: removed when Data is 1, retained when it is 0.
Another workaround is to use the Condition connector: if the condition is satisfied it executes the true branch, otherwise the false branch, and in each branch you can reuse the same Compose content.

JSON Schema - array with multiple different object types

I'm writing a schema using draft/2020-12 to validate an array of multiple objects. I will be using YAML for the examples for readability (and it's YAML documents that are going to be validated).
I've seen many different approaches but none seem to do a very good job for some reason.
The data to validate
pipeline:
  - foo: abc
    someOption: 123
  - baz: def
    anotherOption: 223
  - foo: ghi
As you can see, the pipeline array contains objects that have completely different properties. The same object type can appear multiple times (there are two instances of an object containing foo; they are to be seen as the same object type).
Schema
$id: 'https://example.com/schema'
$schema: 'https://json-schema.org/draft/2020-12/schema'
type: object
properties:
  pipeline:
    type: array
    additionalItems: false
    anyOf:
      - items:
          type: object
          additionalProperties: false
          properties:
            foo:
              type: string
            someOption:
              type: integer
          required: [foo]
      - items:
          type: object
          additionalProperties: false
          properties:
            baz:
              type: string
            anotherOption:
              type: integer
          required: [baz]
To me, this looks correct, although it fails validation and I can't quite see why...
Even if it had worked, I'm not sure if I'm supposed to use anyOf or oneOf in this case. Is it matching across all array items or per individual array item?
You have it slightly wrong. Currently your schema says: for the pipeline array, either all of the items must be foo objects, or all of the items must be baz objects.
You need to swap the locations of anyOf and items:
items applies to every item in the array.
anyOf checks that at least one of the subschemas is valid against the instance location (in this case, each item in the array).
{
"$id": "https://example.com/schema",
"$schema": "https://json-schema.org/draft/2020-12/schema",
"type": "object",
"properties": {
"pipeline": {
"type": "array",
"items": {
"anyOf": [
{
"type": "object",
"additionalProperties": false,
"properties": {
"foo": {
"type": "string"
},
"someOption": {
"type": "integer"
}
},
"required": [
"foo"
]
}, {
"type": "object",
"additionalProperties": false,
"properties": {
"baz": {
"type": "string"
},
"anotherOption": {
"type": "integer"
}
},
"required": [
"baz"
]
}
]
}
}
}
}
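As a quick sanity check, here is a minimal Ruby sketch that validates the YAML data against the corrected schema. It assumes the json_schemer gem (recent versions support draft 2020-12) and the standard yaml library; neither is part of the original question:

require 'yaml'
require 'json_schemer' # gem install json_schemer

# The corrected schema: anyOf nested inside items, so each array element
# may independently be a "foo" object or a "baz" object.
schema = YAML.safe_load(<<~SCHEMA)
  $schema: 'https://json-schema.org/draft/2020-12/schema'
  type: object
  properties:
    pipeline:
      type: array
      items:
        anyOf:
          - type: object
            additionalProperties: false
            properties:
              foo: { type: string }
              someOption: { type: integer }
            required: [foo]
          - type: object
            additionalProperties: false
            properties:
              baz: { type: string }
              anotherOption: { type: integer }
            required: [baz]
SCHEMA

data = YAML.safe_load(<<~DATA)
  pipeline:
    - foo: abc
      someOption: 123
    - baz: def
      anotherOption: 223
    - foo: ghi
DATA

schemer = JSONSchemer.schema(schema)
puts schemer.valid?(data) # => true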

How to loop through a nested array and retrieve values

I have the following array:
my_tst = [
[
{
"name": "shield",
"version": "8.6.3"
},
{
"name": "bosh-dns",
"version": "1.17.0"
},
{
"name": "nessus_agent",
"version": "1.0.24"
},
{
"name": "node-exporter",
"version": "4.2.0"
},
{
"name": "syslog",
"version": "11.6.1"
}
],
[
{
"name": "shield",
"version": "8.6.3"
},
{
"name": "bosh-dns",
"version": "1.16.0"
},
{
"name": "nessus_agent",
"version": "1.0.24"
},
{
"name": "node-exporter",
"version": "4.2.0"
},
{
"name": "syslog",
"version": "11.6.1"
}
]
]
I am trying to loop through the array and output only the values of name.
I used this loop:
my_tst["name"].each do |run|
p run
end
The loop raises an error:
TypeError: no implicit conversion of String into Integer
How do I output all values in the nested array?
You're calling [] on an array, which expects a numeric index so you can access elements by position. You're passing a string, which is how you fetch values from hashes, and that is the problem.
You have an array of arrays containing hashes (with an interesting indentation), so you need to iterate the outer array first in order to reach the hashes inside each inner array.
This is one way you can achieve that:
my_tst.each_with_object([]) do |e, arr|
  e.each { |f| arr << f[:name] }
end
# ["shield", "bosh-dns", "nessus_agent", "node-exporter", "syslog", "shield", "bosh-dns", "nessus_agent", "node-exporter", "syslog"]
Or:
my_tst.flat_map do |e|
  e.map { |f| f[:name] }
end
Anyway, there's going to be a nested iteration.
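If you only need to print each name rather than collect them into an array, a plain nested loop over the my_tst data above also works. A minimal sketch (note that the "name": syntax in the literal produces symbol keys, hence release[:name]):

my_tst.each do |release_list|
  release_list.each { |release| p release[:name] }
end
# "shield"
# "bosh-dns"
# "nessus_agent"
# ...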

Is this expected Query Performance from CosmosDB for "between" queries on an integer property

I have a cosmosdb collection (sql api) that I've populated with documents representing CIDR Network Ranges.
The relevant part of each document is
{
"Network": "31.216.102.0/23",
"IPRangeStart": 534275584,
"IPRangeEnd": 534276095,
Each CIDR block has its start and end IP addresses converted to an unsigned integer and stored in the IPRangeStart and IPRangeEnd properties.
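As an illustration of that conversion (not part of the original question), a minimal Ruby sketch using the standard library's IPAddr reproduces the values shown above:

require 'ipaddr'

cidr  = IPAddr.new('31.216.102.0/23')
range = cidr.to_range            # network address .. broadcast address

range.first.to_i # => 534275584  (IPRangeStart)
range.last.to_i  # => 534276095  (IPRangeEnd)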
When I run a query to search for a specific entry by its start range, it works as expected and is quite fast.
SELECT top 1 * FROM c WHERE c.IPRangeStart = 532361216
Request Charge: 3.02 RUs
However, when I introduce a between-style query using the <= / >= operators, it gets VERY expensive.
SELECT top 1 * FROM c WHERE c.IPRangeStart <= 534275590 AND c.IPRangeEnd >= 534275590
Request Charge: 1647.99 RUs
I've reviewed the index setup on the collection, shown below.
I've also applied two additional range indexes on the collection for the two specific properties in question, though there doesn't appear to be a way to check the progress of these indexes being applied/created in the background.
Is there something obvious that I might be missing?
{
"indexingMode": "consistent",
"automatic": true,
"includedPaths": [
{
"path": "/*",
"indexes": [
{
"kind": "Range",
"dataType": "Number",
"precision": -1
},
{
"kind": "Hash",
"dataType": "String",
"precision": 3
}
]
},
{
"path": "/IPRangeStart/?",
"indexes": [
{
"kind": "Range",
"dataType": "Number",
"precision": -1
},
{
"kind": "Hash",
"dataType": "String",
"precision": 3
}
]
},
{
"path": "/IPRangEnd/?",
"indexes": [
{
"kind": "Range",
"dataType": "Number",
"precision": -1
},
{
"kind": "Hash",
"dataType": "String",
"precision": 3
}
]
}
],
"excludedPaths": []
}
I think I solved it. The problem stemmed from the fact that I had a greater-than filter on one property and a less-than filter on a different property.
It appears that Cosmos was merging the full set of documents that satisfied each independent filter clause.
Since the largest CIDR range in the set was a /18 (a 16k address block), I was able to get it working by also bounding each property from the other side:
WHERE start <= value
AND start >= value - 32768
AND end >= value
AND end <= value + 32768

Ruby: Delete a recurring hash in a large nested data structure

I am trying to move data between services and need to remove a recurring hash from a large record that contains both hashes and arrays.
The hash to remove from every section of the record is:
{
"description": "simple identifier",
"name": "id",
"type": "id"
},
Here's example data:
{"stuff": { "defs": [
{
"description": "simple identifiery",
"name": "id",
"type": "id"
},
{
"name": "aDate",
"type": "date"
},
{
"defs": [
{
"description": "simple identifier",
"name": "id",
"type": "id"
},
{
"case-sensitive": true,
"length": null,
"name": "Id",
"type": "string"
},
{
"name": "anotherDate",
"type": "dateTime"
}
],
},
{
"defs": [
{
"description": "simple identifier",
"name": "id",
"type": "id"
},
...lots more....
I created a couple of recursive functions to remove the element(s), but I'm left with an empty hash '{}'. I also tried to remove the parent, but found that I removed the hash's parent and not the hash itself.
I'm pretty sure I could create a new hash and populate it with only the data I want, but there must be a better way to do this.
I am not working in Rails and would like to avoid using Rails gems.
I figured this out by looking more closely at the data structure. The elements that need to be removed are always inside an array, so before recursing I check whether the hash key/value exists and delete it if so. I'm sure this could be coded better, so let me know what you think.
def recursive_delete!(node, key, value)
  if node.is_a?(Array)
    node.delete_if { |elm| elm[key] == value }
    node.each do |elm|
      recursive_delete!(elm, key, value)
    end
  elsif node.is_a?(Hash)
    node.each_value do |v|
      recursive_delete!(v, key, value)
    end
  end
end
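For reference, a hedged usage sketch: assuming the record has been parsed from JSON (so the keys are strings) out of a hypothetical record.json file, the recurring identifier hash can be stripped like this:

require 'json'

record = JSON.parse(File.read('record.json')) # hypothetical input file

# Deletes every array element whose "name" key equals "id",
# i.e. the recurring identifier hash, at any nesting depth.
recursive_delete!(record, 'name', 'id')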
If you are looking for a way to delete an exact copy of a given hash from inside a complex Array/Hash data structure, it's easy:
def remove_hash_from(source, hsh)
  return unless source.is_a?(Hash) || source.is_a?(Array)
  source.each do |*args|
    if args.last == hsh
      source.delete(args.first)
    elsif args.last.is_a?(Hash) || args.last.is_a?(Array)
      remove_hash_from(args.last, hsh)
    end
  end
  source
end
data = [
  { h: 'v',
    j: [{ h: 'v' },
        { a: 'c' },
        8,
        'asdf'] },
  asdf: { h: 'v', j: 'c' }
]
remove_hash_from(data, { h: 'v' })
# => [{:h=>"v", :j=>[{:a=>"c"}, 8, "asdf"]}, {:asdf=>{:h=>"v", :j=>"c"}}]
Possibly, you will need to adjust the method above for your needs, but I hope the general idea is clear.
