Reducing duplication for JSON test input in RSpec - ruby

I'm working on an application that reads JSON content from files and uses it to produce output. I'm testing with RSpec, and my specs are littered with JSON literal content all over the place. There's a ton of duplication, the files are big and hard to read, and it's getting so painful to add new cases that it's discouraging me from covering the corner cases.
Is there a good strategy for me to reuse large sections of JSON in my specs? I'd like to store the JSON somewhere that's not in the spec file, so I can focus on the test logic in the specs, and just understand which example JSON I'm using.
I understand that if the tests are hard to write, I may need to refactor the application, but until I can get the time to do that, I need to cover these test cases.
Below is one modified example from the application. I have to load many different JSON-formatted strings like this, many of which are considerably larger and more complex:
RSpec.describe DataGenerator do
  describe "#create_data" do
    let(:input) {
      '{ "schema": "TEST_SCHEMA",
         "tables": [
           { "name": "CASE_INFORMATION",
             "rows": 1,
             "columns": [
               { "name": "case_location_id", "type": "integer", "initial_value": "10000", "strategy": "next" },
               { "name": "id", "type": "integer", "delete_key": true, "initial_value": "10000", "strategy": "next" }
             ]
           }
         ]
       }'
    }

    it "generates the correct number of tables" do
      generator = DataGenerator.new(input)
      expect(generator.tables.size).to eq 1
    end
  end
end

We had the very same problem. We solved it by creating the following helper:
module JsonHelper
  def get_json(name)
    File.read(Rails.root.join('spec', 'fixtures', 'json', "#{name}.json"))
  end
end
We moved all the JSON into files in the spec/fixtures/json folder. Now you will be able to use it as:
include JsonHelper
let(:input){ get_json :create_data_input }
(assuming you put your JSON into create_data_input.json)
Naturally you can tweak it as much as you like/need. For example, we were stubbing external services' JSON responses, so we created a get_service_response(service_name, request_name, response_type) helper. It is much more readable now when we use get_service_response('cdl', 'reg_lookup', 'invalid_reg').
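A minimal sketch of what such a service-response helper might look like; the directory layout and file naming (spec/fixtures/json/<service>/<request>_<response_type>.json) are assumptions for illustration, not taken from the original setup:
module JsonHelper
  # Hypothetical layout: spec/fixtures/json/cdl/reg_lookup_invalid_reg.json
  def get_service_response(service_name, request_name, response_type)
    File.read(Rails.root.join('spec', 'fixtures', 'json', service_name.to_s,
                              "#{request_name}_#{response_type}.json"))
  end
end
The point of the extra arguments is purely readability: the spec states which service, which request, and which response variant it exercises, while the file I/O stays in one place.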

Related

Using a Power Automate flow, how do I convert JSON array to a delimited string?

In Power Automate I am calling an API which returns this JSON:
{
  "status": "200",
  "Suburbs": [
    {
      "ID": "1000",
      "Name": "CONCORD WEST",
      "Postcode": "2138"
    },
    {
      "ID": "1001",
      "Name": "LIBERTY GROVE",
      "Postcode": "2138"
    },
    {
      "ID": "1002",
      "Name": "RHODES",
      "Postcode": "2138"
    },
    {
      "ID": "3891",
      "Name": "UHRS POINT",
      "Postcode": "2138"
    },
    {
      "ID": "1003",
      "Name": "YARALLA",
      "Postcode": "2138"
    }
  ]
}
Using PA actions, how do I convert this JSON to a String variable that looks like this?
"CONCORD WEST, LIBERTY GROVE, RHODES, UHRS POINT, YARALLA"
I figured out how to do this. I prefer not to use complex code-style expressions in Power Automate flows, as I think they are hard to understand and hard to maintain, so I used standard PA actions where I could.
I parsed the JSON, then used "Select" to pick out the suburb names, then used concat() within a "for each" loop through the Suburbs array. I think Compose could probably be used in place of the concat(), but I stopped investigating once I'd found this solution.
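For comparison, the underlying transformation is just a map and a join. A minimal Ruby sketch of the same logic (outside Power Automate, purely to show the data flow; the response string is abbreviated to two suburbs):
require 'json'

response = '{"status":"200","Suburbs":[{"ID":"1000","Name":"CONCORD WEST","Postcode":"2138"},{"ID":"1001","Name":"LIBERTY GROVE","Postcode":"2138"}]}'

# Parse JSON -> select the Name of each suburb -> join with a delimiter
names = JSON.parse(response)["Suburbs"].map { |suburb| suburb["Name"] }
puts names.join(", ")  # => CONCORD WEST, LIBERTY GROVE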

How can I two-phase split a large JSON file in NiFi?

I'm using NiFi to retrieve data and publish it to Kafka. I'm currently in a test phase, using a large JSON file.
My JSON file contains 500K records.
At the moment, I have a GetFile processor to fetch the file, followed by a SplitJson.
JsonPath Expression: $..posts.*
This configuration works with small files that contain 50K records, but with large files it crashes.
My JSON file looks like the following, with the 500K records inside "posts":[]
{
  "meta": {
    "requestid": "request1000",
    "http_code": 200,
    "network": "twitter",
    "query_type": "realtime",
    "limit": 10,
    "page": 0
  },
  "posts": [
    {
      "network": "twitter",
      "posted": "posted1",
      "postid": "id1",
      "text": "text1",
      "lang": "lang1",
      "type": "type1",
      "sentiment": "sentiment1",
      "url": "url1"
    },
    {
      "network": "twitter",
      "posted": "posted2",
      "postid": "id2",
      "text": "text2",
      "lang": "lang2",
      "type": "type2",
      "sentiment": "sentiment2",
      "url": "url2"
    }
  ]
}
I've read some documentation on this problem, but the topics cover text files, where the suggestion is to chain several SplitText processors to split the file progressively. With a rigid structure like my JSON, I don't understand how I could do that.
I'm looking for a solution that handles the 500K records well.
Unfortunately I think this case (large array inside a record) is not handled very well right now...
SplitJson requires the entire flow file to be read into memory, and it also doesn't have an outgoing split size. So this won't work.
SplitRecord generally would be the correct solution, but currently there are two JSON record readers - JsonTreeReader and JsonPathReader. Both of these stream records, but the issue here is there is only one huge record, so they will each read the entire document into memory.
There have been a couple of efforts around this specific problem, but unfortunately none of them have made it into a release.
This PR, which is now closed, added a new JSON record reader that could stream records starting from a JSON path, which in your case could be $.posts:
https://github.com/apache/nifi/pull/3222
With that reader you wouldn't even do a split, you would just send the flow file to PublishKafkaRecord_2_0 (or whichever appropriate version of PublishKafkaRecord), and it would read each record and publish to Kafka.
There is also an open PR for a new SelectJson processor that looks like it could potentially help:
https://github.com/apache/nifi/pull/3455
Try using the SplitRecord processor in NiFi.
Define Record Reader/Writer controller services in the SplitRecord processor.
Then configure Records Per Split to 1 and use the splits relationship for further processing.
(OR)
If you want to flatten and fork the record, then use the ForkRecord processor in NiFi.
For usage refer to this link.
I had the same issue with large JSON files and ended up writing a streaming parser.
Use the ExecuteGroovyScript processor with the following code.
It splits the large incoming file into small ones:
@Grab(group='acme.groovy', module='acmejson', version='20200120')
import groovyx.acme.json.AcmeJsonParser
import groovyx.acme.json.AcmeJsonOutput

def ff = session.get()
if (!ff) return
def objMeta = null
def count = 0

ff.read().withReader("UTF-8") { reader ->
    new AcmeJsonParser().withFilter {
        onValue('$.meta') {
            objMeta = it // just remember it to use later
        }
        onValue('$.posts.[*]') { objPost ->
            def ffOut = ff.clone(false) // clone without content
            ffOut.post_index = count    // add an attribute with the index
            // write a small json containing the shared meta and this single post
            ffOut.write("UTF-8") { writer ->
                AcmeJsonOutput.writeJson([meta: objMeta, post: objPost], writer, true)
            }
            REL_SUCCESS << ffOut // transfer to success
            count++
        }
    }.parse(reader)
}
ff.remove()
Example output file:
{
  "meta": {
    "requestid": "request1000",
    "http_code": 200,
    "network": "twitter",
    "query_type": "realtime",
    "limit": 10,
    "page": 0
  },
  "post": {
    "network": "twitter",
    "posted": "posted11",
    "postid": "id11",
    "text": "text11",
    "lang": "lang11",
    "type": "type11",
    "sentiment": "sentiment11",
    "url": "url11"
  }
}

Ruby Airborne array testing is not working as expected

I have the JSON below
{
  "menu": {
    "sections": [
      {
        "type": 4,
        "frames": [
          {
            "itens": []
          }
        ],
        "order": 0
      },
      {
        "type": 4,
        "frames": [
          {
            "itens": [
              {
                "id": "1719016",
                "type": 0,
                "free": false
              }
            ]
          }
        ],
        "order": 1
      }
    ]
  }
}
and the test below, which is meant to check that every item in the itens arrays has an id property:
expect_json_keys('menu.sections.0.frames.*.itens.*', :id)
The problem is that this test passes, but it should fail.
My test only fails when I change my expectation to this:
expect_json_keys('menu.sections.0.frames.*.itens.0', :id)
Why does this test succeed instead of failing when using itens.*?
I reproduced your problem and tried to debug it a bit.
I'm seeing this airborne gem for the first time (so take the following with a grain of salt), but I think the problem hides in the airborne implementation itself; here, to be more precise: https://github.com/brooklynDev/airborne/blob/master/lib/airborne/path_matcher.rb#L82
This line is intended to run the expectation block (this one in this particular case) for each item matching the wildcarded segment, but for an empty array it simply does nothing. No expectations run, no failures.
So it's not something wrong in your test code; it's about the gem itself.
As a kind of workaround, you could try something like the following:
expect_json_types('menu.sections.0.frames.*.itens', :array_of_objects) # <= add this
expect_json_keys('menu.sections.0.frames.*.itens.*', :id)
i.e., testing the type of the value before testing the value itself; in this case it fails with Expected array_of_objects got Array instead.
Thank you very much @konstantin-strukov. This solution works fine for this test case.
But in some test cases I still have to write some extra code.
The expectation you've written fails for this JSON: http://www.mocky.io/v2/5c827f26310000e8421d1e83. OK, I have a test case where it should really fail, so I'll use your solution in a lot of use cases. Thank you again.
But I have some test cases that shouldn't fail when I have at least one filled itens property (http://www.mocky.io/v2/5c827f26310000e8421d1e83). expect_json_keys('menu.sections.0.frames.*.itens.?', :id) should be sufficient, but it isn't, because it behaves the same whether I use itens.* or itens.?. I've tried to fit your solution to these test cases, but it didn't work as expected.
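If airborne's wildcards can't express "at least one non-empty itens array", one fallback is to drop down to plain JSON.parse plus standard RSpec matchers for that particular assertion. A minimal sketch, assuming the raw response body string is available in a raw_json variable (that name is illustrative, not part of airborne's API):
require 'json'

sections = JSON.parse(raw_json)["menu"]["sections"]
itens = sections.flat_map { |s| s["frames"] }.flat_map { |f| f["itens"] }

expect(itens).not_to be_empty        # guards against the "all itens arrays empty" case
expect(itens).to all(include("id"))  # every collected item must have an "id" key
This trades airborne's path syntax for explicit Ruby, but it makes the "empty array" semantics unambiguous.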

How to assert a JSON response whose results are in a random order every time in JMeter?

I am using a JSON Assertion to assert that a JSON path exists. Suppose I have a JSON response with an array of rooms, each of which contains an array of cabinets, just like the following example:
"rooms":
[
{
"cabinets":
[
{
"id":"HFXXXX",
"locationid":null,
"name":"HFXXXX",
"type":"Hosp"
},
{
"id":"HFYYYY",
"locationid":null,
"name":"HFYYYY",
"type":"Hosp"
},
{
"id":"HFZZZZ",
"locationid":null,
"name":"HFZZZZ",
"type":"Hosp"
}
],
"hasMap":false,
"id":"2",
"map":
{
"h":null,
"w":null,
"x":null,
"y":null
},
"name":"Fantastic Room#3"
}
],
[
{ "cabinets":
[
{
"id":"HFBBBB",
"locationid":null,
"name":"HFBBBB",
"type":"Hosp"
}
],
"hasMap":false,
"id":"3",
"map":
{
"h":null,
"w":null,
"x":null,
"y":null
},
"name":"BallRoom #4"
}
]
I want to make sure that the ids of all the cabinets are correct, so I define the JSON path as rooms[*].cabinets[*].id and expect the value to be ["HFXXXX","HFYYYY","HFZZZZ","HFBBBB"].
This works perfectly, except that sometimes the values are returned in a different order, ["HFBBBB","HFXXXX","HFYYYY","HFZZZZ"] instead of ["HFXXXX","HFYYYY","HFZZZZ","HFBBBB"], and then the assertion fails. The problem is with the order of the returned array, not the values themselves.
Is there a way to sort the response before asserting, while keeping the JSON Assertion? Or is the only way to extract the values I want to assert against and use them in a JSR223 Assertion (Groovy or JavaScript)?
If that is the case, can you show me an example of how I could do it in a JSR223 Assertion?
I would recommend using a dedicated library, for instance JSONAssert; this way you will not have to reinvent the wheel and can compare two JSON objects in a single line of code.
Download jsonassert-x.x.x.jar and put it somewhere in the JMeter Classpath.
Download a suitable version of the JSON in Java library and put it in the JMeter Classpath as well. If you're uncertain what the "JMeter Classpath" is, just drop the .jars into the "lib" folder of your JMeter installation.
Restart JMeter so it can load the new libraries.
Add a JSR223 Assertion as a child of the request which returns the above JSON.
Put the following code into the "Script" area:
def expected = vars.get('expected')         // the ${expected} JMeter Variable
def actual = prev.getResponseDataAsString() // the parent sampler's response body

// The third argument (strict = false) makes the comparison lenient,
// so array ordering and formatting differences are ignored.
org.skyscreamer.jsonassert.JSONAssert.assertEquals(expected, actual, false)
It will compare the response of the parent sampler with the contents of the ${expected} JMeter Variable; the order of elements, the presence of new lines, and formatting do not matter, as it compares only keys and values.
In case of a mismatch you will get an error message stating so as the Assertion Result, and the full debugging output will be available in STDOUT (the console where you started JMeter from).

Convert string to hash and output in JSON format in Ruby

I have a string object which is returned from the controller like below:
details = '{"name"=>"David", "age"=>"12", "emp_id"=>"E009", "exp"=>"10", "company"=>"Starlink"}'
So details.class would be String.
I need to convert it to a Hash and output it in JSON format, so the output would be in the format below. I know it can be done using the eval method, but I think there would be security issues with that, so please suggest the best way to do it:
{
  "name": "David",
  "age": "12",
  "emp_id": "E009",
  "exp": "10",
  "company": "Starlink"
}
How do I achieve this? Please help.
It looks like you should go to your API vendor and tell them they have a bug, since Hash#inspect is not a valid serialization: it is not standard, and may not always be reversible.
If what you get is in the form above, though, you can treat it as JSON after running gsub on it:
formatted_details = JSON.pretty_generate(JSON.parse(details.gsub('=>', ':')))
puts formatted_details
# => {
#      "name": "David",
#      "age": "12",
#      "emp_id": "E009",
#      "exp": "10",
#      "company": "Starlink"
#    }
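One caveat with the blanket gsub: it would also rewrite any => that happens to occur inside a value. A slightly more targeted substitution (still a sketch, not bulletproof for arbitrary input) only rewrites the delimiter that follows a key's closing quote:
require 'json'

details = '{"name"=>"David", "age"=>"12", "emp_id"=>"E009", "exp"=>"10", "company"=>"Starlink"}'

# Replace `"=>` (a key's closing quote plus the arrow) with `":`,
# leaving any => inside string values untouched.
json_string = details.gsub(/"\s*=>\s*/, '":')
puts JSON.pretty_generate(JSON.parse(json_string))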
