Convert string to hash and output in JSON format in Ruby

I have a string object which is returned from the controller like below:
details = '{"name"=>"David", "age"=>"12", "emp_id"=>"E009", "exp"=>"10", "company"=>"Starlink"}'
So the details.class would be String.
I need to convert it to a Hash and output it in JSON format, so the output would be in the format below. I know it can be done using the eval method, but I think there are security issues with that, so please suggest the best way to do it.
{
  "name": "David",
  "age": "12",
  "emp_id": "E009",
  "exp": "10",
  "company": "Starlink"
}
How do I achieve this? Please help.

It looks like you should go to your API vendor and tell them they have a bug, since Hash#inspect is not a valid serialization format: it is not standard, and it is not always reversible.
If what you get is in the form above, though, you can treat it as JSON after running gsub on it:
require 'json'

formatted_details = JSON.pretty_generate(JSON.parse(details.gsub('=>', ':')))
puts formatted_details
# {
#   "name": "David",
#   "age": "12",
#   "emp_id": "E009",
#   "exp": "10",
#   "company": "Starlink"
# }
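Note that a blanket gsub will also rewrite a => that happens to appear inside a value. A slightly more defensive sketch (still assuming the inspect-style input shown above, with quoted keys and values) only rewrites => where it separates two quoted strings:

require 'json'

details = '{"name"=>"David", "age"=>"12", "emp_id"=>"E009", "exp"=>"10", "company"=>"Starlink"}'

# Only replace "=>" between a closing and an opening quote,
# leaving any literal "=>" inside a value untouched.
json_text = details.gsub(/"\s*=>\s*"/, '":"')

puts JSON.pretty_generate(JSON.parse(json_text))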

Related

Using a Power Automate flow, how do I convert JSON array to a delimited string?

In Power Automate I am calling an API which returns this JSON:
{
  "status": "200",
  "Suburbs": [
    { "ID": "1000", "Name": "CONCORD WEST", "Postcode": "2138" },
    { "ID": "1001", "Name": "LIBERTY GROVE", "Postcode": "2138" },
    { "ID": "1002", "Name": "RHODES", "Postcode": "2138" },
    { "ID": "3891", "Name": "UHRS POINT", "Postcode": "2138" },
    { "ID": "1003", "Name": "YARALLA", "Postcode": "2138" }
  ]
}
Using PA actions, how do I convert this JSON to a String variable that looks like this:
"CONCORD WEST, LIBERTY GROVE, RHODES, UHRS POINT, YARALLA"
I figured out how to do this. I prefer not to use complex code-style expressions in Power Automate flows, as I think they are hard to understand and hard to maintain, so I used standard PA actions where I could.
I parsed the JSON, then used "Select" to pick out the suburb names, then used concat() within a "for each" loop through the Suburbs array. I think Compose could probably be used in place of the concat(), but I stopped investigating once I'd found this solution.
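If a single expression is acceptable, the same result should also be achievable in one step: assuming the "Select" action is switched to output plain text values (just each suburb's Name), the expression join(body('Select'), ', ') concatenates them with a comma delimiter. join() is a standard Power Automate expression function; this is offered only as a more compact alternative to the loop.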

Not able to parse string to date in Logstash/Elasticsearch

I created a Logstash script to read a log file containing timestamps in the format "2018-05-08T12:18:53.506+0530". I am trying to parse them to dates using the date filter in Logstash:
date {
  match => ["edrTimestamp", "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'", "ISO8601"]
  target => "edrTimestamp"
}
Running the above Logstash script creates an Elasticsearch index, but the string is still not parsed to a date, and the index shows a date parse exception.
It creates output like this.
{
  "tags": [
    "_dateparsefailure"
  ],
  "statusCode": "805",
  "campaignRedemptionLimitTotal": 1000,
  "edrTimestamp": "2018-05-22T16:41:25.162+0530 ",
  "msisdn": "+919066231327",
  "timestamp": "2018-05-22T16:41:25.122+0530",
  "redempKeyword": "print1",
  "campaignId": "C910101-1527004962-1582",
  "category": "RedeemRequestReceived"
}
Please tell me what's wrong in the above code. I have tried many other alternatives, but it is still not working.
Your issue is that your timestamp has a space at the end of it ("edrTimestamp": "2018-05-22T16:41:25.162+0530 "), which is causing the date parsing to fail. You need to add:
mutate {
  strip => "edrTimestamp"
}
before your date filter.
I don't think you should be escaping the Z: quoting it as 'Z' makes the pattern match a literal Z character, and you should not be matching "Z" at all, since your time is not Zulu (zero offset). You will want to include the offset as part of the pattern, so you probably want something like:
yyyy-MM-dd'T'HH:mm:ss.SSSZ
The Heroku grok debug app is useful for this.
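Putting the two suggestions together, a sketch of the corrected filter chain (assuming the field name edrTimestamp from the question):

mutate {
  strip => "edrTimestamp"
}
date {
  match => ["edrTimestamp", "yyyy-MM-dd'T'HH:mm:ss.SSSZ", "ISO8601"]
  target => "edrTimestamp"
}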
If I pass your string
2018-05-08T12:18:53.506+0530
and use the filter %{TIMESTAMP_ISO8601}, then it matches. This pattern is made up of the following sub-patterns:
TIMESTAMP_ISO8601 %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}?

Reducing duplication for JSON test input in RSpec

I'm working on an application that reads JSON content from files and uses it to produce output. I'm testing with RSpec, and my specs are littered with JSON literal content all over the place. There's a ton of duplication, the files are big and hard to read, and it's getting to the point where adding new cases is so painful that it's discouraging me from covering the corner cases.
Is there a good strategy for me to reuse large sections of JSON in my specs? I'd like to store the JSON somewhere that's not in the spec file, so I can focus on the test logic in the specs, and just understand which example JSON I'm using.
I understand that if the tests are hard to write, I may need to refactor the application, but until I can get the time to do that, I need to cover these test cases.
Below is one modified example from the application. I have to load many different JSON-formatted strings like this; many are considerably larger and more complex:
RSpec.describe DataGenerator do
  describe "#create_data" do
    let(:input) {
      '{ "schema": "TEST_SCHEMA",
         "tables": [
           { "name": "CASE_INFORMATION",
             "rows": 1,
             "columns": [
               { "name": "case_location_id", "type": "integer", "initial_value": "10000", "strategy": "next" },
               { "name": "id", "type": "integer", "delete_key": true, "initial_value": "10000", "strategy": "next" }
             ]
           }
         ]
      }'
    }

    it "generates the correct number of tables" do
      generator = DataGenerator.new(input)
      expect(generator.tables.size).to eq 1
    end
  end
end
We had the very same problem. We solved it by creating the following helper:
module JsonHelper
  def get_json(name)
    File.read(Rails.root.join('spec', 'fixtures', 'json', "#{name}.json"))
  end
end
We moved all the JSON into files in the spec/fixtures/json folder. Now you will be able to use it as:
include JsonHelper
let(:input){ get_json :create_data_input }
Naturally you can tweak it as much as you like/need. For example, we were stubbing external services' JSON responses, so we created a get_service_response(service_name, request_name, response_type) helper. It is much more readable now when we use get_service_response('cdl', 'reg_lookup', 'invalid_reg').
(The let(:input) example above assumes you put your JSON into spec/fixtures/json/create_data_input.json.)
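If you would rather not write include JsonHelper in every spec file, one way to wire it up globally is via RSpec's configuration (a sketch, assuming your spec_helper loads files under spec/support):

# spec/support/json_helper.rb
module JsonHelper
  def get_json(name)
    File.read(Rails.root.join('spec', 'fixtures', 'json', "#{name}.json"))
  end
end

RSpec.configure do |config|
  # Make get_json available in every example group without an explicit include.
  config.include JsonHelper
end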

MongoDB Ruby driver typecasting while inserting document

While creating a document from data that comes in from a web interface, the values are not correctly typecast to integer, date, and other types. For example:
{"_id": "<some_object_id>", "name": "hello", "age": "20", "dob": "1994-02-22"}
Since attributes are entered dynamically, their types cannot be known in advance. Is there any way I can have them typed on the client side, like:
{"_id": "<some_object_id>", "name": "hello", "age": "$int:20", "dob": "$date:1994-02-22"}
Any help is highly appreciated.
Since you appear to be concerned about the strings that come in from a form such as a POST, the simple answer is that you cast them in Ruby.
If the field is what you expect to be a number then cast it to an int. And the same goes for dates as well.
Your mongo driver will correctly interpret these and store them as the corresponding BSON types in your MongoDB collection. The same goes in reverse, when you read collection data you will get it back cast into your native types.
"1234".to_i
Date.strptime("{ 2014, 2, 22 }", "{ %Y, %m, %d }")
But that's be basic Ruby part.
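Putting that together with the driver, here is a minimal sketch (assuming the mongo gem, a local server, and a hypothetical people collection; the field names come from the question):

require 'mongo'
require 'time'

client = Mongo::Client.new(['127.0.0.1:27017'], database: 'test')

# Raw form params arrive as strings; cast them before inserting.
params = { "name" => "hello", "age" => "20", "dob" => "1994-02-22" }

doc = {
  name: params["name"],
  age:  Integer(params["age"]),         # stored as a BSON integer
  dob:  Time.parse(params["dob"]).utc   # stored as a BSON date
}

client[:people].insert_one(doc)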
Now you could do something like you pseudo-suggested and store your information not as native types but as strings with some form of type tagging. But I just don't see the point, as you would have to:
1. Detect the type at some stage and apply the tag.
2. Live with the fact that you just ruined all the benefits of having the native types in the collection, such as querying and aggregating over date ranges and basic summing of values.
And while we seem to be going down the track of the "anything" type, where users just arbitrarily insert data and something else has to work out what type it is, consider the following examples of MongoDB documents:
{
  name: "Fred",
  values: [ 1, 2, 3, 4 ]
}
{
  name: "Sally",
  values: "a"
}
So in Mongo terminology, that document structure is considered bad: even though Mongo does have a flexible schema concept, this type of mixing will break things. So don't do it; rather, handle it in the following way, which is quite acceptable even though the schemas are different:
{
  name: "Fred",
  values: [ 1, 2, 3, 4 ]
}
{
  name: "Sally",
  mystring: "a"
}
Long story short: your application should be aware of the types of data that are coming in. If you allow user-defined forms, then your app needs to be able to attach a type to them. If you have a field that could be a string or a Date, then your app needs to determine which type it is and cast it, or otherwise store it correctly.
As it stands, you will benefit from reconsidering your use case, rather than waiting for something else to work all of that out for you.

Looping through multiple Regex extractor output

Can you please tell me how to loop through the result of a Regex Post Processor that returns multiple values?
Example:
JSON Response message:
{
  "reply": {
    "code": "111",
    "status": "SUCCESS",
    "customerID": [
      "222-a",
      "b-333",
      "44-4",
      "s-555",
      "666",
      "777",
      "88-8"
    ]
  }
}
The regex extractor helped me extract each individual component of the array:
links_1=222-a
links_2=b-333
I can use some.url/${links_1}.
Here is exactly what I am trying to achieve, but this does not seem to work.
Can you please help me?
Loop through the regex-extracted variables using a counter, and use each one in another HTTP request sampler:
While Controller (${__javaScript(${C} < ${links_matchNr})})
  HTTP Sampler: use ${__V(links_${C})}
  Counter (start=1, increment=1, maximum=${links_matchNr}, referenceName=C)
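The __V() function is what makes the nested reference work: JMeter first evaluates the inner links_${C} to a concrete variable name such as links_2, and __V() then returns that variable's value.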
Use a ForEach Controller:
input variable: links
output variable: link (for example)
You can then use each value inside the controller through:
${link}
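This works because the Regular Expression Extractor stores its matches as numbered variables (links_1, links_2, ... plus links_matchNr); a ForEach Controller with input variable prefix links walks those numbered variables and exposes the current one as ${link}.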
I have created this tutorial, http://goo.gl/4cBno, which I hope is useful. To view the desktop sharing clearly, click on the full-screen icon at the bottom right.
