I'm trying to make a classified text, and I'm having a problem turning
(class1 (subclass1) (subclass2 item1 item2))
To
(class1 (subclass1 item1) (subclass2 item1 item2))
I have no idea how to turn the text above into the one below without caching subclass1 in memory. I'm using Perl on Linux, so any solution using a shell script or Perl is welcome.
Edit: I've tried using grep, saving the whole subclass1 in a variable, then modifying it and exporting it to the list; but the list may get larger, and that approach would use a lot of memory.
I have no idea how to turn the text above into the one below
The general approach:
1. Parse the text.
You appear to have lists of space-separated lists and atoms. If so, the result could look like the following:
{
    type  => 'list',
    value => [
        {
            type  => 'atom',
            value => 'class1',
        },
        {
            type  => 'list',
            value => [
                {
                    type  => 'atom',
                    value => 'subclass1',
                },
            ],
        },
        {
            type  => 'list',
            value => [
                {
                    type  => 'atom',
                    value => 'subclass2',
                },
                {
                    type  => 'atom',
                    value => 'item1',
                },
                {
                    type  => 'atom',
                    value => 'item2',
                },
            ],
        },
    ],
}
It's possible that something far simpler could be generated, but you were light on details about the format.
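As a rough sketch of what that parse step could look like for this kind of space-separated, parenthesized input (shown in Ruby purely to illustrate the shape; a Perl version would follow the same recursion, and all names here are made up):

# Tokenize into "(", ")" and atoms, then build nested { type:, value: } nodes.
def parse(text)
  tokens = text.scan(/[()]|[^\s()]+/)
  read_node(tokens)
end

def read_node(tokens)
  token = tokens.shift
  if token == '('
    node = { type: 'list', value: [] }
    node[:value] << read_node(tokens) until tokens.first == ')'
    tokens.shift   # discard the closing ")"
    node
  else
    { type: 'atom', value: token }
  end
end

parse("(class1 (subclass1) (subclass2 item1 item2))")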
2. Extract the necessary information from the tree.
You were light on details about the data format, but it could be as simple as the following if the above data structure was created by the parser:
my $item = $tree->{value}[2]{value}[1]{value};
3. Perform the required modifications.
You were light on details about the data format, but it could be as simple as the following if the above data structure was created by the parser:
my $new_atom = { type => 'atom', value => $item };
push @{ $tree->{value}[1]{value} }, $new_atom;
4. Serialize the data structure.
For the above data structure, you could use the following:
sub serialize {
    my ($node) = @_;
    return $node->{type} eq 'list'
        ? "(".join(" ", map { serialize($_) } @{ $node->{value} }).")"
        : $node->{value};
}
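Applied to the tree above after the push, serialize($tree) returns "(class1 (subclass1 item1) (subclass2 item1 item2))", which is the desired output.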
Other approaches could be available depending on the specifics.
This is more a logical problem than an RxJS problem, I guess, but I don't get how to solve it.
[input 1]
From a cities stream, I will receive 1 or 2 objects (cities1 or cities2 are test fixtures).
1 object if there is only one language available, 2 objects for a city with both languages.
[input 2]
I also have a selectedLanguage ("fr" or "nl").
[algo]
If the language of the object corresponds to the selectedLanguage, I will pluck the city. This works in my RxJS when I receive 2 objects (cities2).
But since I can also receive 1 object, the filter is not the right thing to do.
[question]
Should I FIRST check whether the cities stream has only one object and add another object, or are there better RxJS/logical options?
const cities1 = [
  {city: "LEUVEN", language: "nl"}
];
const cities2 = [
  {city: "BRUSSEL", language: "nl"},
  {city: "BRUXELLES", language: "fr"}
];
const selectedLang = "fr";

const source$ = from(cities1);
const result = source$.pipe(
  mergeMap((city) => {
    return of(selectedLang).pipe(
      map(lang => {
        return {
          lang: city.language,
          city: city.city,
          selectedLang: lang
        }
      }),
      filter(a => a.lang === selectedLang),
      pluck('city')
    )
  })
);
result.subscribe(console.log)
If selectedLang is not an observable (i.e. you don't want it to change), then I think it would be much easier if you keep it as a plain value:
const result = source$.pipe(
  filter(city => city.language === selectedLang),
  map(city => city.city)
);
There's nothing wrong with using external parameters, and it makes the stream easier to read.
Now, if selectedLang is an observable, and you want result to always give the city with that selectedLang, then you probably need to combine both streams, while keeping all the cities received so far:
const selectedLang$ = of(selectedLang); // This is actually a stream that can change value
const cities$ = source$.pipe(
  scan((acc, city) => [...acc, city], [])
);

const result = combineLatest([selectedLang$, cities$]).pipe(
  map(([selectedLang, cities]) => cities.find(city => city.language == selectedLang)),
  filter(found => Boolean(found)),
  map(city => city.city)
);
Edit: note that this result will emit every time cities$ or selectedLang$ changes and one of the cities matches. If you don't want repeats, you can use the distinctUntilChanged() operator. This could probably be optimised using an exhaustMap or something, but that makes it harder to read IMO.
Thanks for your response. It's of great value to me. Indeed, I will forget about the selectedLang$ and pass it as a regular string. Problem 1 solved.
I'll explain my question in a bit more detail. My observable cities$ is in fact a GET and will always return 1 or 2 rows.
leuven:
[ { city: 'LEUVEN', language: 'nl', selectedLanguage: 'fr' } ]
brussel:
[
{ city: 'BRUSSEL', language: 'nl', selectedLanguage: 'fr' },
{ city: 'BRUXELLES', language: 'fr', selectedLanguage: 'fr' }
]
In case it returns two rows, I will be able to filter out the right value:
filter(city => city.language === selectedLang) => BRUXELLES when selectedLang is "fr"
But in case I only receive one row, I should always return this city.
What is the best solution to this without using if statements? I've been trying to work with object destructuring and scanning the array, but the result is always one record.
// HTTP get
const leuven: City[] = [ {city: "LEUVEN", language: "nl"} ];
// same HTTP get
const brussel: City[] = [ {city: "BRUSSEL", language: "nl"},
{city: "BRUXELLES", language: "fr"}
];
mapp(of(brussel), "fr").subscribe(console.log);
function mapp(cities$: Observable<City[]>, selectedLanguage: string): Observable<any> {
  return cities$.pipe(
    map(cities => {
      return cities.map(city => { return { ...city, "selectedLanguage": selectedLanguage } });
    }),
    // scan((acc, value) => [...acc, { ...value, selectedLanguage} ])
  );
}
I have the following:
data_spec['data'] = "some.awesome.values"
data_path = ""
data_spec['data'].split('.').each do |level|
data_path = "#{data_path}['#{level}']"
end
data = "site.data#{data_path}"
At this point, data equals a string: "site.data['some']['awesome']['values']"
What I need help with is using the string to get the value of: site.data['some']['awesome']['values']
site.data has the following value:
{
  "some" => {
    "awesome" => {
      "values" => [
        {
          "things" => "Stuff",
          "stuff" => "Things",
        },
        {
          "more_things" => "More Stuff",
          "more_stuff" => "More Things",
        }
      ]
    }
  }
}
Any help is greatly appreciated. Thanks!
You could do as tadman suggested and use site.data.dig('some', 'awesome', 'values') if you are using Ruby 2.3.0 (which is awesome and I didn't even know it existed). This is probably your best choice. But if you really want to write the code yourself, read below.
You were on the right track; the best way to do this is:
data_spec['data'] = "some.awesome.values"
data = nil
data_spec['data'].split('.').each do |level|
  if data.nil?
    data = site.data[level]
  else
    data = data[level]
  end
end
To understand why this works, you first need to understand that site.data['some']['awesome']['values'] is the same as saying: first get some, then inside that get awesome, then inside that get values. So our first step is retrieving some. Since we don't have that first level yet, we get it from site.data and save it to the variable data. Once we have that, we just get each subsequent level from data and save it back into data, allowing us to get deeper and deeper into the hash.
So, using your example, data would initially look like this:
{"awesome" => {
"values" => [
{
"things" => "Stuff",
"stuff" => "Things",
},
{
"more_things" => "More Stuff",
"more_stuff" => "More Things",
}
]
}
}
Then this:
{"values" => [
{
"things" => "Stuff",
"stuff" => "Things",
},
{
"more_things" => "More Stuff",
"more_stuff" => "More Things",
}
]
}
and finally it would end up like this:
[
{
"things" => "Stuff",
"stuff" => "Things",
},
{
"more_things" => "More Stuff",
"more_stuff" => "More Things",
}
]
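For what it's worth, once that clicks, the same descent can be written in one line with inject (a sketch using the same site.data and data_spec as above):

# Start from site.data and index one level deeper per dot-separated segment.
data = data_spec['data'].split('.').inject(site.data) { |hash, level| hash[level] }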
If you're receiving a string like 'x.y.z' and need to navigate a nested hash, Ruby 2.3.0 includes the dig method:
spec = "some.awesome.values"
data = {
"some" => {
"awesome" => {
"values" => [
'a','b','c'
]
}
}
}
data.dig(*spec.split('.'))
# => ["a", "b", "c"]
If you don't have Ruby 2.3.0 and upgrading isn't an option you can just patch it in for now:
class Hash
  def dig(*path)
    path.inject(self) do |location, key|
      location.respond_to?(:keys) ? location[key] : nil
    end
  end
end
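With that patch in place, the earlier call works the same way on pre-2.3 Rubies:

data.dig(*spec.split('.'))        # => ["a", "b", "c"]
data.dig('some', 'missing', 'x')  # => nil (missing keys just give nil)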
I wrote something that does exactly this. Feel free to take any information of value from it or steal it! :)
https://github.com/keithrbennett/trick_bag/blob/master/lib/trick_bag/collections/collection_access.rb
Check out the unit tests to see how to use it:
https://github.com/keithrbennett/trick_bag/blob/master/spec/trick_bag/collections/collection_access_spec.rb
There's an accessor method that returns a lambda. Since lambdas can be called using the [] operator (method, really), you can get such a lambda and access arbitrary numbers of levels:
accessor['hostname.ip_addresses.0']
or, in your case:
require 'trick_bag'
accessor = TrickBag::CollectionsAccess.accessor(site.data)
do_something_with(accessor['some.awesome.values'])
What you are looking for is something generally looked down upon, and for good reasons. But here you go: it's called eval:
binding.eval data
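As a sketch of how that fits the code from the question (data is the string built earlier; keep in mind that eval runs whatever ends up in that string, so only use it on input you control):

# data == "site.data['some']['awesome']['values']"
values = binding.eval(data)   # evaluates the string in the current scope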
I am comparing two JSON contents or objects in Ruby + Cucumber like this,
but when I compare them, it does not ignore the order of the content if it varies. I know this statement compares them as two strings. So is there any way I can compare two JSON objects while ignoring their order?
expect(@act_resp_excl_key).to eq(exp_data_excl_key)
Adding a little more information to the above details. I have two JSON documents like the ones below.
json1 = {
  "entries" => [{
      "doingBusinessAsName" => "KROGER FOODS",
      "legalName" => "Kroger-Corps"
    }
  ]
}
json2 = {
  "entries" => [{
      "legalName" => "Kroger-Corps"
      "doingBusinessAsName" => "KROGER FOODS",
    }
  ]
}
When I compare these two JSON documents in Ruby + Cucumber, I get a failure. But logically they are the same, and I should get a pass. I use the above comparison statement to validate the two JSONs.
@tgf,
I used the statement you specified, but my comparison still fails. Could you please help me figure out what the issue could be?
expect(JSON.parse(@act_resp_excl_key)).to eq JSON.parse(exp_data_excl_key)
If the data is simply JSON in a string (please post an example of the data), you can just parse it to a Ruby hash and compare that.
require 'json'
JSON.parse(@act_resp_excl_key).class # => Hash
Then assert the two hashes are equal:
expect(JSON.parse(@act_resp_excl_key)).to eq JSON.parse(exp_data_excl_key)
This works even if the order is different.
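As a quick illustration (made-up strings; the key order differs, yet the parsed hashes compare equal):

require 'json'

a = '{"entries":[{"doingBusinessAsName":"KROGER FOODS","legalName":"Kroger-Corps"}]}'
b = '{"entries":[{"legalName":"Kroger-Corps","doingBusinessAsName":"KROGER FOODS"}]}'

JSON.parse(a) == JSON.parse(b)   # => true; Hash#== ignores key order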
Your commas are not in the correct locations for json2.
> json1 = { "entries" => [{ "doingBusinessAsName" => "KROGER FOODS", "legalName" => "Kroger-Corps" } ] }
=> {"entries"=>[{"doingBusinessAsName"=>"KROGER FOODS", "legalName"=>"Kroger-Corps"}]}
> json2 = { "entries" => [{ "legalName" => "Kroger-Corps", "doingBusinessAsName" => "KROGER FOODS" } ] }
=> {"entries"=>[{"legalName"=>"Kroger-Corps", "doingBusinessAsName"=>"KROGER FOODS"}]}
> json1==json2
=> true
I've got a working Mongo query that I need to translate into Ruby:
var reducer = function(current, result){
  result.loginsCount++;
  result.lastLoginTs = Math.max(result.lastLoginTs, current.timeStamp);
}

var finalizer = function(result){
  result.lastLoginDate = new Date(result.lastLoginTs).toISOString().split('T')[0];
}

db.audit_log.group({
  key : {user : true},
  cond : {events : { $elemMatch : { action : 'LOGIN_SUCCESS'}}},
  initial : {lastLoginTs : -1, loginsCount : 0},
  reduce : reducer,
  finalize : finalizer
})
I'm hitting several sticking points getting this to work in Ruby. I'm not really all that familiar with Mongo, and I'm not sure what to pass as arguments to the method calls. This is my best guess, after connecting to the database and a collection called audit_log:
audit_log.group({
  "key" => {"user" => "true"},
  "cond" => {"events" => { "$elemMatch" => { "action" => "LOGIN_SUCCESS"}}},
  "initial" => {"lastLoginTs" => -1, "loginsCount" => 0},
  "reduce" => "function(current, result){result.loginsCount += 1}",
  "finalize" => "function(result){ result.lastLoginDate = new Date(result.lastLoginTs).toISOString().split('T')[0]; }"
})
Or something like that. I've also tried a simpler aggregate operation based on the Mongo docs, but I couldn't get that working either. I was only able to get really simple queries to return results. Are those keys (key, cond, initial, etc.) even necessary, or is that only for JavaScript?
This is how the function finally took shape using the 1.10.0 Mongo gem:
@db.collection("audit_log").group(
  [:user, :events],
  {'events' => { '$elemMatch' => { 'action' => 'LOGIN_SUCCESS' }}},
  { 'lastLoginTs' => -1, 'loginsCount' => 0 },
  "function(current, result){ result.loginsCount++; result.lastLoginTs = Math.max(result.lastLoginTs, current.timeStamp);}",
  "function(result){ result.lastLoginDate = new Date(result.lastLoginTs).toISOString().split('T')[0];}"
)
With the Mongo Driver, you leave off the keys: "key", "cond", "initial", "reduce", "finalize" and simply pass in the respective values.
I've linked to two approaches taken by other SO users here and here.
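For reference, since the question also mentioned trying aggregate, a roughly equivalent aggregation pipeline might look like the sketch below (untested; field names are taken from the original query, and the date formatting done by finalize would happen in Ruby afterwards):

pipeline = [
  { '$match' => { 'events' => { '$elemMatch' => { 'action' => 'LOGIN_SUCCESS' } } } },
  { '$group' => {
    '_id'         => '$user',
    'loginsCount' => { '$sum' => 1 },
    'lastLoginTs' => { '$max' => '$timeStamp' }
  } }
]

results = @db.collection("audit_log").aggregate(pipeline)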
I'd like to loop through an array and create a hash for each object in the array, then group all those hashes into an array of hashes.
Here's an example starting array for me:
urls = ["http://stackoverflow.com", "http://example.com", "http://foobar.com"]
Now let's say I'd like to have a hash for each of those URLs, gathered into an array like this:
urls = [
  {
    'url' => "http://stackoverflow.com",
    'dns_status' => "200",
    'title' => "Stack Overflow"
  },
  {
    'url' => "http://example.com",
    'dns_status' => "200",
    'title' => "Example"
  }
]
Leaving aside where I get the values for the dns_status and title keys in the example, I guess what I'm missing is how to loop through the original array and create a hash for each object...
I've played around with inject, collect, map and each and read through the docs but can't quite make sense of it or get anything to work.
Any recommendations? Would this be easier to accomplish with a class?
EDIT:
Thanks for your help everyone. Figured this out and got it working. Cheers!
Do something with each element of something enumerable and store the result in an array: that is what map does. Specify what you want in the block, like this:
urls = ["http://stackoverflow.com", "http://example.com", "http://foobar.com"]
p res = urls.map{|url| {"url"=>url, "dns_status"=>200, "title"=>url[7..-5]} }
#=> [{"url"=>"http://stackoverflow.com", "dns_status"=>200, "title"=>"stackoverflow"}, {"url"=>"http://example.com", "dns_status"=>200, "title"=>"example"}, {"url"=>"http://foobar.com", "dns_status"=>200, "title"=>"foobar"}]
"what I'm missing is how to loop through the original array and create a hash for each object..."
urls = [
"http://stackoverflow.com",
"http://example.com",
"http://foobar.com"
]
urls.each {|entry|
puts entry
}
You could use .map! for instance. But I am still not sure what your target result ought to be. How about this?
urls.map! {|entry|
{ 'url' => entry, 'dns_status' => "200", 'title' => "Stack Overflow"}
}
urls # => [{"url"=>"http://stackoverflow.com", "dns_status"=>"200", "title"=>"Stack Overflow"}, {"url"=>"http://example.com", "dns_status"=>"200", "title"=>"Stack Overflow"}, {"url"=>"http://foobar.com", "dns_status"=>"200", "title"=>"Stack Overflow"}]
Yikes, the result is hard to see. It is this:
[
  {
    "url" => "http://stackoverflow.com",
    "dns_status" => "200",
    "title" => "Stack Overflow"
  },
  {
    "url" => "http://example.com",
    "dns_status" => "200",
    "title" => "Stack Overflow"
  },
  {
    "url" => "http://foobar.com",
    "dns_status" => "200",
    "title" => "Stack Overflow"
  }
]
Obviously, you still need to supply the proper content for title, but you did not give this in your original question, so I could not fill it in.
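If it helps with filling in those values, here is one possible sketch using net/http inside the map block (the HTTP status code stands in for dns_status and a naive regex pulls the title; both are assumptions about what you actually want):

require 'net/http'
require 'uri'

results = urls.map do |url|
  response = Net::HTTP.get_response(URI(url))              # no redirect or error handling here
  title    = response.body[/<title>(.*?)<\/title>/im, 1]   # naive <title> extraction
  { 'url' => url, 'dns_status' => response.code, 'title' => title }
end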