RethinkDB: execute multiple avg in one query

I have a review table with multiple number columns. I would like to compute the avg of all columns in one query.
So if the table looks like:
{
  foo : 2,
  bar : 5,
  foobar : 10
},
{
  foo : 4,
  bar : 3,
  foobar : 12
}
then I would like to get the avg for each column in one query. I know I can do:
r.table('stats').avg('foo')
on each column, but I would like to do this in just one query and map the results into a single object.
Any ideas on how to do this?

You can use map with reduce (if every record in the table has all 3 fields):
r.table("stats").map(function(row) {
  return {foo : row("foo"), bar : row("bar"), foobar : row("foobar"), count : 1};
}).reduce(function(left, right) {
  return {
    foo : left("foo").add(right("foo")),
    bar : left("bar").add(right("bar")),
    foobar : left("foobar").add(right("foobar")),
    count : left("count").add(right("count"))
  };
}).do(function(res) {
  return {
    foo: res('foo').div(res("count")),
    bar: res('bar').div(res("count")),
    foobar: res('foobar').div(res("count"))
  };
})
If records may be missing some of the fields, you can keep a separate count per field in the map step and then, in do, divide each sum by its own field count.
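To see what the ReQL query computes, the same map/reduce/divide pattern can be sketched in plain JavaScript over an in-memory array, using the sample rows from the question:

```javascript
// The map/reduce/divide pattern from the ReQL answer, sketched over a
// plain array (sample data from the question).
const rows = [
  { foo: 2, bar: 5, foobar: 10 },
  { foo: 4, bar: 3, foobar: 12 },
];

// map: carry each field plus a count of 1 per row
const sums = rows
  .map((row) => ({ foo: row.foo, bar: row.bar, foobar: row.foobar, count: 1 }))
  // reduce: sum each field and the counts pairwise
  .reduce((left, right) => ({
    foo: left.foo + right.foo,
    bar: left.bar + right.bar,
    foobar: left.foobar + right.foobar,
    count: left.count + right.count,
  }));

// do: divide each sum by the row count
const avgs = {
  foo: sums.foo / sums.count,
  bar: sums.bar / sums.count,
  foobar: sums.foobar / sums.count,
};
// avgs → { foo: 3, bar: 4, foobar: 11 }
```

This is only an illustration of the shape of the computation; in RethinkDB the reduction runs server-side over the table.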

Related

Amount of characters which are in episodes 1 and 2 with the name Rick

I have a problem. I am using the https://rickandmortyapi.com/graphql GraphQL API. I want to query the amount of characters which are in episodes 1 and 2 with the name Rick. Unfortunately my query is wrong. How could I get the desired output? I want to use the _"..."Meta field to get the meta data.
query {
  characters(filter: {name: "Rick"}) {
    _episode(filter: {id: 1
      id: 2}) {
      count
    }
  }
}

Search and extract an element located at various paths in a JSON structure

I have JSON in a PostgreSQL database and I need to extract an array that is not always located in the same place.
Problem
I need to extract the choicies array of a particular element name.
The element name is known, but not where it sits in the structure.
Rules
All element names are unique.
The choicies attribute may not be present.
JSON structure
pages : [
  {
    name : 'page1',
    elements : [
      { name : 'element1', choicies : [...] },
      { name : 'element2', choicies : [...] }
    ]
  }, {
    name : 'page2',
    elements : [
      {
        name : 'element3',
        templateElements : [
          {
            name : 'element4',
            choicies : [...]
          }, {
            name : 'element5',
            choicies : [...]
          }
        ]
      }, {
        name : 'element6',
        choicies : [...]
      }
    ]
  }, {
    name : 'element7',
    templateElements : [
      {
        name : 'element8',
        choicies : [...]
      }
    ]
  }
]
My attempt to extract the elements by flattening the structure:
SELECT pages::jsonb ->> 'name',
       pageElements::jsonb ->> 'name',
       pageElements::jsonb -> 'choicies',
       pages.*
FROM myTable as myt,
     jsonb_array_elements(myt.json -> 'pages') as pages,
     jsonb_array_elements(pages -> 'elements') as pageElements
Alas, the choicies column is always null in my results. And that approach will not work when the element is located somewhere else, like
page.elements.templateElements
page.templateElements
... and so on
I don't know if there is a way to search for a key (name) wherever it sits in the JSON structure and extract another key (choicies).
I wish to call a select with the element name as a parameter and have it return the choicies of that element.
For instance, if I call the select with an element name (element1, element4 or element8), the choicies array of that element (as rows, JSON or text, no preference here) should be returned.
Wow! The solution found goes beyond expectations! JSONPath was the way to go.
Amazing what we can do with this.
SQL
-- Use jsonpath to search, filter and return what's needed
SELECT jsonb_path_query(
         myt.jsonb,
         '$.** ? (@.name == "element_name_to_look_at")'
       ) -> 'choicies' as jsonbChoices
FROM myTable as myt
Explanation of the jsonpath in SQL
jsonb_path_query(jsonb_data, '$.** ? (@.name == "element_name_to_look_at")') -> 'choicies'
jsonb_path_query : PostgreSQL jsonpath function
jsonb_data : database column with jsonb data, or a jsonb expression
$.** : search everywhere from the root element
? : where clause / filter
@ : the object returned by the search
@.name == "element_name_to_look_at" : every object whose name equals element_name_to_look_at
-> 'choicies' : for each object returned by the jsonpath, get its choicies attribute
Final version
After getting the choicies jsonb array, we return a dataset with every choice.
The choicies arrays look like this:
[{value:'code1', text:'Code Label 1'}, {value:'code2', text:'Code Label 2'}, ...]
SELECT choices.*
FROM (
  -- Use jsonpath to search, filter and return what's needed
  SELECT jsonb_path_query(myt.jsonb, '$.** ? (@.name == "element_name_to_look_at")') -> 'choicies' as jsonbChoices
  FROM myTable as myt
) choice,
-- Explode the returned json array into columns
jsonb_to_recordset(choice.jsonbChoices) as choices(value text, text text);
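For intuition, the recursive descent that `$.**` performs can be sketched in plain JavaScript; `findChoicies` is a hypothetical helper written for illustration, not a PostgreSQL function:

```javascript
// Walk the whole structure (the JavaScript analogue of jsonpath '$.**'),
// and return the choicies array of the first object whose name matches.
// Names are unique per the question's rules, so "first" is sufficient.
function findChoicies(node, targetName) {
  if (Array.isArray(node)) {
    for (const item of node) {
      const found = findChoicies(item, targetName);
      if (found !== undefined) return found;
    }
  } else if (node !== null && typeof node === "object") {
    if (node.name === targetName) return node.choicies; // the '? (@.name == ...)' filter
    for (const value of Object.values(node)) {
      const found = findChoicies(value, targetName);
      if (found !== undefined) return found;
    }
  }
  return undefined;
}

// A trimmed version of the question's structure:
const doc = {
  pages: [
    { name: "page1", elements: [{ name: "element1", choicies: ["a", "b"] }] },
    {
      name: "page2",
      elements: [
        { name: "element3", templateElements: [{ name: "element4", choicies: ["c"] }] },
      ],
    },
  ],
};
// findChoicies(doc, "element4") → ["c"]
```

The point of the jsonpath version is that PostgreSQL performs this walk for you, regardless of whether the element sits under elements, templateElements, or anywhere else.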

lua table.sort not behaving as expected

I have a program that evaluates sets of images and assigns them a given value. Now I would like to sort the output of this program. To do this I have the following code:
function SelectTop(params, images, count)
  local values = {}
  for k, v in pairs(images) do
    local noError, res = pcall(evaluate, params, v)
    if noError then
      values[v] = res
    else
      values[v] = 9999999999999999999999999999999999999999999999999999999999
    end
  end
  function compare(a, b)
    return a[2] < b[2]
  end
  table.sort(values, compare)
  print(values)
end
where we can reasonably assume the output of evaluate to be akin to math.random(7000) (the actual code is far more complex and involves neural networks).
Now I would expect the output to be sorted, but instead I get something like this:
{
table: 0x40299d30 : 4512.3590053809
table: 0x40299580 : 4029.3450116073
table: 0x40298dd0 : 6003.9508240314
table: 0x40297de0 : 6959.9145312802
table: 0x40297630 : 4265.2784117677
table: 0x40296e40 : 3850.0829011681
table: 0x40296690 : 4007.2308907069
table: 0x40296ec0 : 3840.5216952082
table: 0x4029a770 : 5059.1475464564
table: 0x40299fc0 : 6058.9603651599
table: 0x40299810 : 1e+58
table: 0x40299060 : 1e+58
table: 0x402988b0 : 5887.729117754
table: 0x402978c0 : 3675.7295252455
table: 0x40296920 : 1e+58
table: 0x4029aa00 : 5624.6042279879
table: 0x40295bf8 : 1391.8185365923
table: 0x40296458 : 4276.09869066
table: 0x40299aa0 : 1e+58
table: 0x402992f0 : 6334.3641972965
table: 0x40298300 : 2660.5004512843
table: 0x40298b40 : 6200.373787482
table: 0x40296148 : 6178.926312832
table: 0x40298380 : 1559.5307868896
table: 0x40295968 : 1e+58
table: 0x40296bb0 : 6708.7545218628
table: 0x4029b550 : 1484.2931717456
table: 0x40298400 : 1638.1286256175
table: 0x40298070 : 3762.7368939272
table: 0x402963d8 : 1500.002116023
table: 0x4029ac90 : 2486.2695974502
table: 0x40295e88 : 1e+58
table: 0x40297b50 : 4806.6468870717
table: 0x4029a4e0 : 4328.0636461426
table: 0x402973a0 : 4757.4343171052
table: 0x4029a250 : 3998.8649821268
}
So why does table.sort not work here? I would have assumed that some sort of sorting would happen.
Anybody know what I'm doing wrong?
So if we want a full example we can do something like this:
function evaluate(a, b)
  return math.random(7000)
end

SelectTop(nil, { {a, b, c}, {d, e, f}, {g, e, f}, {f, e, f} }, 0)
output:
{ table: 0x41c2af18 : 5560
table: 0x41c2afa8 : 4131
table: 0x41c2af60 : 4892
table: 0x41c2aff0 : 5273
}
table.sort works on arrays, not on dictionaries.
You'll need to replace values[v] = res with something like values[#values + 1] = {v, res} and adjust compare accordingly.
Right now table.sort sees an empty array: there are no items at index 1/2/3/..., because you're indexing the results with the image itself.
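The suggested fix can be sketched in JavaScript for illustration (the question's code is Lua): collect (image, score) pairs into a positional array and sort by the score. `scoreImage` below stands in for the question's `evaluate`:

```javascript
// Sketch of the fix: build a real array of [image, score] pairs indexed by
// position, then sort by the score element, mirroring the Lua compare(a, b).
function selectTop(images, scoreImage) {
  const values = [];
  for (const image of images) {
    let score;
    try {
      score = scoreImage(image);
    } catch (err) {
      // Plays the role of the huge sentinel value in the question:
      // failed evaluations sort to the end.
      score = Infinity;
    }
    values.push([image, score]); // positional index, not values[image] = score
  }
  values.sort((a, b) => a[1] - b[1]); // compare the scores, ascending
  return values;
}

// Usage: the pairs come back ordered by score, smallest first.
const sorted = selectTop(["imgA", "imgB", "imgC"], (img) => img.length);
```

The key difference from the original: because the entries live at indices 1, 2, 3, ... (positions, in JavaScript 0-based), the sort actually has elements to permute.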

Group list of objects/AR relation by user_id

I have a list of objects which is actually an AR Relation. My objects have these fields:
{
  agreement_id: 1,
  app_user_id: 1,
  agency_name: 'Small business 1'
  ..etc..
},
{
  agreement_id: 2,
  app_user_id: 1,
  agency_name: 'Small business 2'
  ..etc..
}
I'm representing my objects as hashes for easier understanding. I need to map my list of objects to a format like this:
{
  1 => [1, 2]
}
This represents a list of agreement_ids grouped by the user. I always know which user I'm grouping on. Here is what I've tried so far:
where(app_user_id: user_id).where('...').select('app_user_id, agreement_id').group_by(&:app_user_id)
This gives me the structure that I want, but not exactly the data that I want; here is the output:
{1=>
[#<Agreement:0x6340fdbb agreement_id: 1, app_user_id: 1>,
#<Agreement:0x91bd4dd agreement_id: 2, app_user_id: 1>]
}
I also thought I would be able to do this with the map method; here is what I tried:
where(app_user_id: user_id).where('....').select('app_user_id, agreement_id').map do |ag|
  { ag.app_user_id => ag.agreement_id }
end.reduce(&:merge)
But it only produces the mapping with the last agreement_id, like this:
{1=>2}
I've tried some other things not worth mentioning. Can anyone suggest a way that would make this work?
This might work:
where(app_user_id: user_id)
  .where('...')
  .select('app_user_id, agreement_id')
  .group_by(&:app_user_id)
  .map { |k, v| Hash[k, v.map(&:agreement_id)] }
Try this one:
where(app_user_id: user_id).
  where('...').
  select('app_user_id, agreement_id').
  map { |a| [a.app_user_id, a.agreement_id] }.
  group_by(&:first)
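For comparison, the desired grouping can be sketched in plain JavaScript over the hashes shown in the question (the Ruby answers above express the same idea with group_by):

```javascript
// Group agreement ids by user id: the target shape is { 1: [1, 2] }.
const rows = [
  { agreement_id: 1, app_user_id: 1, agency_name: "Small business 1" },
  { agreement_id: 2, app_user_id: 1, agency_name: "Small business 2" },
];

const grouped = {};
for (const row of rows) {
  // Create the bucket for this user on first sight, then append the id.
  (grouped[row.app_user_id] ??= []).push(row.agreement_id);
}
// grouped → { 1: [1, 2] }
```

The reason the question's reduce(&:merge) attempt lost data is that each merge overwrote the previous value under the same key instead of appending to a list, as the loop above does.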

Faster query by value

I want to query MongoDB to count how many top-level documents have 0 as a value of a nested field inside results.
For instance, in this collection:
{name: "mary", results: {"foo" : 0, "bar" : 8}}
{name: "bob", results: {"baz" : 9, "qux" : 0}}
{name: "leia", results: {"foo" : 9, "norf" : 5}}
my query should return 2, because two of the documents have 0 as a value of a nested document of results.
Here's my attempt
db.collection.find({$where: function() {
  for (var key in this.results) {
    if (this.results[key] === 0) { return true; }
  }
  return false;
}})
which works on the above dataset, but is too slow. My real data is 100k documents, each having 500 nested fields inside results, and the above query takes a few minutes. Is it possible to design this query in a faster way?
There is no way to do it other than the one you are doing.
You can only change the schema or use aggregation, but I don't think that is what you want.
There is a post about it you can check here:
mongoDB: find by embedded value
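One common version of the schema change the answer alludes to is storing results as an array of {k, v} pairs, so that "results.v" becomes a single indexable path. `toPairs` below is a hypothetical reshaping helper, sketched in JavaScript:

```javascript
// Reshape { results: { foo: 0, bar: 8 } } into
// { results: [{ k: "foo", v: 0 }, { k: "bar", v: 8 }] }.
// With this shape, the $where scan becomes an indexable equality match:
//   db.collection.createIndex({ "results.v": 1 })
//   db.collection.count({ "results.v": 0 })
function toPairs(doc) {
  return {
    name: doc.name,
    results: Object.entries(doc.results).map(([k, v]) => ({ k, v })),
  };
}

const reshaped = toPairs({ name: "mary", results: { foo: 0, bar: 8 } });
// reshaped.results → [{ k: "foo", v: 0 }, { k: "bar", v: 8 }]
```

The trade-off is that per-key lookups change from results.foo to a query on {"results.k": "foo"}, but value queries like the one in the question stop requiring a full collection scan.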
