Ruby Airborne array testing is not working as expected

I have the JSON below:
{
  "menu": {
    "sections": [
      {
        "type": 4,
        "frames": [
          {
            "itens": []
          }
        ],
        "order": 0
      },
      {
        "type": 4,
        "frames": [
          {
            "itens": [
              {
                "id": "1719016",
                "type": 0,
                "free": false
              }
            ]
          }
        ],
        "order": 1
      }
    ]
  }
}
and the test below, which should check that every item in each itens array has an id property:
expect_json_keys('menu.sections.0.frames.*.itens.*', :id)
The problem is that this test passes, but it should fail.
My test only fails when I change my expectation to this:
expect_json_keys('menu.sections.0.frames.*.itens.0', :id)
Why does this test succeed instead of failing when using itens.*?

I reproduced your problem and tried to debug a bit.
This is the first time I've seen the airborne gem (so take the following with a grain of salt), but I think the problem hides in the airborne implementation itself, here, to be more precise: https://github.com/brooklynDev/airborne/blob/master/lib/airborne/path_matcher.rb#L82
This line is intended to run the expectation block (this one, in this particular case) for each item matching the wildcarded segment, but for an empty array it simply does nothing. No expectations run, no failures.
So it's not something wrong in your test code; it's about the gem itself.
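To see why "no expectations ran" means a green test, here is a minimal plain-Ruby sketch of the same effect (not airborne's actual code, just an illustration):

itens = []  # the first frame's itens array is empty
itens.each do |item|
  expect(item).to have_key(:id)  # never executed: the loop body runs zero times
end
# zero expectations ran, so RSpec happily reports the example as passing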
As a kind of workaround, you could try something like the following:
expect_json_types('menu.sections.0.frames.*.itens', :array_of_objects) # <= add this
expect_json_keys('menu.sections.0.frames.*.itens.*', :id)
i.e. test the type of the value before testing the value itself; in this case it fails with Expected array_of_objects got Array instead.

Thank you very much @konstantin-strukov. This solution works fine for this test case.
But in some test cases I still have to write some extra code.
The expectation you've written fails for this JSON: http://www.mocky.io/v2/5c827f26310000e8421d1e83. OK, I do have a test case where it should really fail, so I'll use your solution in a lot of use cases. Thank you again.
But I have some test cases that shouldn't fail as long as at least one itens property is filled (http://www.mocky.io/v2/5c827f26310000e8421d1e83). expect_json_keys('menu.sections.0.frames.*.itens.?', :id) should be sufficient, but it isn't: the test passes whether I use itens.* or itens.?. I've tried to fit your solution to these test cases, but it didn't work as expected.
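One possible direction for the "at least one filled itens" cases (untested here, and relying on airborne's documented support for lambdas as expected values, so treat it as a sketch): assert the non-emptiness yourself on a frame that should have items, then check the keys:

expect_json('menu.sections.0.frames.0.itens', ->(itens) { expect(itens).not_to be_empty })
expect_json_keys('menu.sections.0.frames.*.itens.*', :id)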

Related

Graphene Django disable suggestions "Did you mean ..."

When you post a query with errors, graphql/graphene makes suggestions to you. For example, sending "i", it suggests "id".
query{
  users{
    i
  }
}
{
  "errors": [
    {
      "message": "Cannot query field \"i\" on type \"User\". Did you mean \"id\"?",
      "locations": [
        {
          "line": 5,
          "column": 9
        }
      ]
    }
  ]
}
Can suggestions be disabled?
More info:
The syntax analysis that adds the suggestions is executed before the middlewares.
Apparently the suggestions are made by the ScalarLeafsRule class.
OK, the people in the graphql-core GitHub repo are awesome; they helped me solve this.
So graphql-core has two relevant versions: 3 (current) and 2.3.2 (legacy).
For graphql-core 3, quoting Cito:
Ok, if you want to keep it closed and also disable introspection it makes a little more sense. I suggest you simply set graphql.pyutils.did_you_mean.MAX_LENGTH = 0. I just committed a small change ffdf1b3 that makes this work a bit better.
You can also ask over at https://github.com/graphql/graphql-js/issues if they want to add some functionality to support your use case. From there it would be ported back here.
For the legacy version:
from graphql.validation import rules

def get_empty_suggested_field_names(schema, graphql_type, field_name):
    return []

def get_empty_suggested_type_names(schema, output_type, field_name):
    return []

rules.fields_on_correct_type.get_suggested_field_names = get_empty_suggested_field_names
rules.fields_on_correct_type.get_suggested_type_names = get_empty_suggested_type_names
You can place this in your Django settings file.
Please see the full thread at https://github.com/graphql-python/graphql-core/issues/97

How to embed a syntax object in another in TextMate language definitions, tmLanguage

I am trying to support Clojure's ignore-text form, #_ (a sort of comment), in VS Code, which uses tmLanguage for its grammar definitions. Since it is common to disable a block of code using #_, I want the disabled block of code to retain its syntax highlighting and just be italicized, indicating its status.
But my lack of skill with tmLanguage seems to stop me. This is one of the failing attempts (a snippet of the CSON):
'comment-constants':
  'begin': '#_\\s*(?=\'?#?[^\\s\',"\\(\\)\\[\\]\\{\\}]+)'
  'beginCaptures':
    '0':
      'name': 'punctuation.definition.comment.begin.clojure'
  'end': '(?=[\\s\',"\\(\\)\\[\\]\\{\\}])'
  'name': 'meta.comment-expression.clojure'
  'patterns': [
    {
      'include': '#constants'
    }
  ]
With #constants defining some Clojure constant objects, like keyword:
'keyword':
  'match': '(?<=(\\s|\\(|\\[|\\{)):[\\w\\#\\.\\-\\_\\:\\+\\=\\>\\<\\/\\!\\?\\*]+(?=(\\s|\\)|\\]|\\}|\\,))'
  'name': 'constant.keyword.clojure'
What I want to happen is that the constants definitions will be used "inside" the comment. For keywords I have this (failing) spec:
it "tokenizes keywords", ->
tests =
"meta.expression.clojure": ["(:foo)"]
"meta.map.clojure": ["{:foo}"]
"meta.vector.clojure": ["[:foo]"]
"meta.quoted-expression.clojure": ["'(:foo)", "`(:foo)"]
"meta.comment-expression.clojure": ["#_:foo"]
for metaScope, lines of tests
for line in lines
{tokens} = grammar.tokenizeLine line
expect(tokens[1]).toEqual value: ":foo", scopes: ["source.clojure", metaScope, "constant.keyword.clojure"]
(The last test in that list.) It fails with this message:
Expected
{ value : ':foo',
scopes : [ 'source.clojure', 'meta.comment-expression.clojure' ] }
to equal
{ value : ':foo',
scopes : [ 'source.clojure', 'meta.comment-expression.clojure', 'constant.keyword.clojure' ] }.
Meaning I am not getting the constant.keyword.clojure scope in place, and thus no keyword colorization for me. 😢
Does anyone know how to do this?
Your keyword regex starts with a lookbehind that requires a whitespace, (, [ or { character directly before the keyword. The _ from #_ doesn't meet that requirement.
(?<=(\\s|\\(|\\[|\\{))
You could simply add _ to the list of allowed characters:
(?<=(\\s|\\(|\\[|\\{|_))
Note that this still wouldn't work as-is for your "#_:foo" test case because of the similar lookahead at the end. You could possibly allow $ there, make the match optional, or change the test case.
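For instance, the end pattern from your comment-constants rule could be extended like this (untested; it just adds end-of-line as an allowed boundary, per the $ idea above):

'end': '(?=[\\s\',"\\(\\)\\[\\]\\{\\}]|$)'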

How to assert a JSON response which have results in random order every time in JMeter?

I am using a JSON Assertion to assert that a JSON path exists. Suppose I have a JSON response with an array of 'rooms', each of which contains an array of 'cabinets', just like the following example:
"rooms":
[
{
"cabinets":
[
{
"id":"HFXXXX",
"locationid":null,
"name":"HFXXXX",
"type":"Hosp"
},
{
"id":"HFYYYY",
"locationid":null,
"name":"HFYYYY",
"type":"Hosp"
},
{
"id":"HFZZZZ",
"locationid":null,
"name":"HFZZZZ",
"type":"Hosp"
}
],
"hasMap":false,
"id":"2",
"map":
{
"h":null,
"w":null,
"x":null,
"y":null
},
"name":"Fantastic Room#3"
}
],
[
{ "cabinets":
[
{
"id":"HFBBBB",
"locationid":null,
"name":"HFBBBB",
"type":"Hosp"
}
],
"hasMap":false,
"id":"3",
"map":
{
"h":null,
"w":null,
"x":null,
"y":null
},
"name":"BallRoom #4"
}
]
I want to make sure that the 'id' values of all the cabinets are correct, therefore I define the JSON path as rooms[*].cabinets[*].id and expect the value to be ["HFXXXX","HFYYYY","HFZZZZ","HFBBBB"].
This works perfectly, except that sometimes the values are returned in a different order, ["HFBBBB","HFXXXX","HFYYYY","HFZZZZ"] instead of ["HFXXXX","HFYYYY","HFZZZZ","HFBBBB"], and then the assertion fails. The problem is with the order of the returned array, not the values themselves.
Is there a way to sort the order of a response before asserting, while still using the JSON Assertion? Or is the only way to extract the values I want to assert against and use them in a JSR223 Assertion (Groovy or JavaScript)?
If that is the case, can you show me an example of how I could do it with the JSR223 Assertion?
I would recommend using a dedicated library, for instance JSONAssert; this way you will not have to reinvent the wheel and can compare 2 JSON objects in a single line of code:
Download jsonassert-x.x.x.jar and put it somewhere in the JMeter Classpath
Download a suitable version of the JSON in Java library and put it in the JMeter Classpath as well. If you're uncertain what the "JMeter Classpath" is, just drop the .jars into the "lib" folder of your JMeter installation
Restart JMeter so it can load the new libraries
Add a JSR223 Assertion as a child of the request which returns the above JSON
Put the following code into the "Script" area:
def expected = vars.get('expected')          // expected JSON, read from the ${expected} JMeter Variable
def actual = prev.getResponseDataAsString()  // actual response body of the parent sampler
org.skyscreamer.jsonassert.JSONAssert.assertEquals(expected, actual, false)  // false = lenient comparison
It will compare the response of the parent sampler with the contents of the ${expected} JMeter Variable; the order of elements, presence of new lines, and formatting do not matter, as it compares only keys and values.
In case of a mismatch you will get an error message stating so as the Assertion Result, and the full debugging output will be available in STDOUT (the console where you started JMeter from).
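For completeness, the ${expected} variable has to be defined somewhere before the assertion runs, e.g. in a User Defined Variables element or a JSR223 PreProcessor; a minimal sketch (the JSON literal here is just an illustration):

vars.put('expected', '{"rooms":[{"cabinets":[{"id":"HFXXXX"},{"id":"HFYYYY"},{"id":"HFZZZZ"}]}]}')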

Reducing duplication for JSON test input in RSpec

I'm working on an application that reads JSON content from files and uses them to produce output. I'm testing with RSpec, and my specs are littered with JSON literal content all over the place. There's a ton of duplication, the files are big and hard to read, and it's getting to the point where it's so painful to add new cases, it's discouraging me from covering the corner cases.
Is there a good strategy for me to reuse large sections of JSON in my specs? I'd like to store the JSON somewhere that's not in the spec file, so I can focus on the test logic in the specs, and just understand which example JSON I'm using.
I understand that if the tests are hard to write, I may need to refactor the application, but until I can get the time to do that, I need to cover these test cases.
Below is one modified example from the application. I have to load many different JSON-formatted strings like this; many are considerably larger and more complex:
RSpec.describe DataGenerator do
  describe "#create_data" do
    let(:input){
      '{ "schema": "TEST_SCHEMA",
         "tables": [
           { "name": "CASE_INFORMATION",
             "rows": 1,
             "columns": [
               { "name": "case_location_id", "type": "integer", "initial_value": "10000", "strategy": "next" },
               { "name": "id", "type": "integer", "delete_key": true, "initial_value": "10000", "strategy": "next" }
             ]
           }
         ]
       }'
    }
    it "generates the correct number of tables" do
      generator = DataGenerator.new(input)
      expect(generator.tables.size).to eq 1
    end
  end
end
We had the very same problem. We solved it by creating the following helper:
module JsonHelper
  def get_json(name)
    File.read(Rails.root.join 'spec', 'fixtures', 'json', "#{name}.json")
  end
end
We moved all the JSON into files in the spec/fixtures/json folder. Now you will be able to use it as:
include JsonHelper
let(:input){ get_json :create_data_input }
Naturally you can tweak it as much as you like/need. For example, we were stubbing external services' JSON responses, so we created a get_service_response(service_name, request_name, response_type) helper. It is much more readable now when we use get_service_response('cdl', 'reg_lookup', 'invalid_reg').
This assumes you put your JSON into create_data_input.json.
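For reference, that second helper might look something like this (a sketch along the same lines; the nested fixture layout is an assumption):

module JsonHelper
  # reads e.g. spec/fixtures/json/cdl/reg_lookup/invalid_reg.json
  def get_service_response(service_name, request_name, response_type)
    File.read(Rails.root.join 'spec', 'fixtures', 'json',
              service_name.to_s, request_name.to_s, "#{response_type}.json")
  end
end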

no implicit conversion from nil to integer - when trying to add anything to array

I'm trying to build a fairly complex hash and I am strangely getting the error
no implicit conversion from nil to integer
when I use the line
manufacturer_cols << {:field => 'test'}
I use the same line later in the same loop, and it works no problem.
The entire code is
manufacturer_cols=[]
manufacturer_fields.each_with_index do |mapped_field, index|
if mapped_field.base_field_name=='exactSKU'
#this is where it is breaking, if I comment this out, all is good
manufacturer_cols << { :base_field=> 'test'}
else
#it works fine here!
manufacturer_cols << { :base_field=>mapped_field.base_field_name }
end
end
------- value of manufacturer_fields --------
[{"base_field":{"base_field_name":"Category","id":1,"name":"Category"}},{"base_field":{"base_field_name":"Description","id":3,"name":"Short_Description"}},{"base_field":{"base_field_name":"exactSKU","id":5,"name":"Item_SKU"}},{"base_field":{"base_field_name":"Markup","id":25,"name":"Retail_Price"}},{"base_field":{"base_field_name":"Family","id":26,"name":"Theme"}}]
Implicit Conversion Errors Explained
I'm not sure precisely why your code is getting this error but I can tell you exactly what the error means, and perhaps that will help.
There are two kinds of conversions in Ruby: explicit and implicit.
Explicit conversions use the short name, like #to_s or #to_i. These are commonly defined in the core, and they are called all the time. They are for objects that are not strings or not integers, but can be converted for debugging or database translation or string interpolation or whatever.
Implicit conversions use the long name, like #to_str or #to_int. This kind of conversion is for objects that are very much like strings or integers and merely need to know when to assume the form of their alter egos. These conversions are never or almost never defined in the core. (Hal Fulton's The Ruby Way identifies Pathname as one of the classes that finds a reason to define #to_str.)
It's quite difficult to get your error, since even NilClass defines the explicit (short name) converters:
nil.to_i
=> 0
">>#{nil}<<" # this demonstrates nil.to_s
=> ">><<"
You can trigger it like so:
Array.new nil
TypeError: no implicit conversion from nil to integer
Therefore, your error is coming from the C code inside the Ruby interpreter. A core class, implemented in C, is being handed a nil when it expects an Integer. It may have a #to_i but it doesn't have a #to_int and so the result is the TypeError.
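To make the distinction concrete, here is a small sketch (Length is a made-up class for illustration):

class Length
  def initialize(n); @n = n; end
  def to_i; @n; end                # explicit conversion only
end

Array.new(Length.new(2))           # TypeError: no implicit conversion of Length into Integer

class Length
  def to_int; @n; end              # implicit conversion: Array.new now accepts it
end

Array.new(Length.new(2))           # => [nil, nil]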
This turned out to be completely unrelated to manufacturer_cols after all.
I had arrived at the manufacturer_cols bit because if I commented it out, the code ran fine.
However, if I instead commented out the part further down the page where I ran through the CSV, it also ran fine.
It turns out the error was caused by attempting to append to the base value when part of the data was nil.
I thought I could use
manufacturer_cols.each do |col|
  base_value = row[col[:row_index].to_i]
  if col[:merges]
    col[:merges].each do |merge|
      base_value += merge[:separator].to_s + row[merge[:merge_row_index]]
    end
  end
end
Unfortunately, that caused the error. The solution was:
base_value = base_value + merge[:separator].to_s + row[merge[:merge_row_index]]
I hope this helps somebody, 'cause as DigitalRoss alluded to, it was quite a wild goose chase nailing down where in the code this was being caused and why.
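(For what it's worth: on a plain Array, indexing with nil is exactly what raises this TypeError, so a defensive version of the loop above might look like the following sketch; the nil guards are my additions:)

manufacturer_cols.each do |col|
  base_value = row[col[:row_index].to_i].to_s   # .to_s guards against a nil cell
  (col[:merges] || []).each do |merge|
    idx = merge[:merge_row_index]
    next if idx.nil?                            # row[nil] raises "no implicit conversion from nil into Integer"
    base_value = base_value + merge[:separator].to_s + row[idx].to_s
  end
end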
I got this error when parsing through an API for "tag/#{idnum}/parents"... Normally, you'd expect a response like this:
{
  "parents": [
    {
      "id": 8,
      "tag_type": "MarketTag",
      "name": "internet",
      "display_name": "Internet",
      "angellist_url": "https://angel.co/internet",
      "statistics": {
        "all": {
          "investor_followers": 1400,
          "followers": 5078,
          "startups": 13214
        },
        "direct": {
          "investor_followers": 532,
          "followers": 1832,
          "startups": 495
        }
      }
    }
  ],
  "total": 1,
  "per_page": 50,
  "page": 1,
  "last_page": 1
}
but when I looked up the parents of the market category "adult" (as it were), I got this:
{
  "parents": [ ],
  "total": 0,
  "per_page": 50,
  "page": 1,
  "last_page": 0
}
Now Ruby allowed a number of interactions with this thing to occur, but eventually it threw the implicit conversion error:
parents.each do |p|
  stats = p['statistics']['all']
  selector << stats['investor_followers'].to_i
end
selected = selector.index(selector.max)  # selector is empty here, so selector.max is nil and selected is nil
parents[selected]['id']                  # <--- CODE FAILED HERE: parents[nil] raises the TypeError
This was a simple fix for me.
When I was getting this error using the Scout app, one of my mapped folders was named header-1; when I removed the hyphen from the folder name and made it header1, the error went away.
It didn't like the hyphen for some reason...
