Equivalent of GROUP BY in Flux

I want to know the latest battery levels in my sensors.
I was hoping that this would show me a table with one row for each (room, position)
from(bucket: "zigbee")
  |> range(start: -1d, stop: now())
  |> filter(fn: (r) => r["_measurement"] == "battery")
  |> keep(columns: ["_time", "room", "position", "_value"])
  |> aggregateWindow(every: 1d, fn: mean, createEmpty: false)
  |> yield(name: "mean")
but it seems to return every row from the last day -- and there are multiple data points for each (room, position) that should have been aggregated away.
In SQL this would be easy.
SELECT MAX(_time), Room, Position, AVG(_value)
FROM Wherever
WHERE _time > DATEADD(DAY, -1, CURRENT_TIMESTAMP)
GROUP BY Room, Position
Is it even possible in Flux? (It's not clear; it seems to be impossible to aggregate out the time dimension -- |> group(columns: ["room", "position"]) does nothing.) (If so, then: how?)
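For reference, the Flux idiom that usually corresponds to SQL's GROUP BY is to regroup by the tag columns and then apply an ungrouped aggregate, since aggregates in Flux operate per table (series) within the current grouping. A sketch, assuming room and position are tags in the schema above:

```flux
from(bucket: "zigbee")
  |> range(start: -1d, stop: now())
  |> filter(fn: (r) => r["_measurement"] == "battery")
  // regroup so each (room, position) pair becomes one table
  |> group(columns: ["room", "position"])
  // mean() collapses each table to a single row, removing _time
  |> mean()
```

This yields one row per (room, position) with the mean _value over the range, much like the SQL GROUP BY query below.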


Reset Scan Accumulator RxJS after certain time

I have a fromEvent observable attached to an input's keydown event, so I can listen to key events.
Inside the pipe, I use the scan operator, so I can accumulate the latest 3 keys introduced by the user.
I check in the scan, the accumulator length, so if it's already three, I clean it up (manual reset).
I need the following behavior: as long as the user keeps typing within 3000 ms of the previous key, the accumulator keeps filling until it reaches the limit (3 keys); but if the user is slower than the time limit (3 s), the next keypress should reset the accumulator manually.
fromEvent(myInput.nativeElement, 'keydown').pipe(
  tap(e => e.preventDefault()),
  scan((acc: KeyboardEvent[], val: KeyboardEvent) => {
    // Add condition here to manually reset the accumulator...
    if (acc.length === 3) {
      acc = [];
    }
    return [...acc, val];
  }, []),
  takeUntil(this.destroy$)
).subscribe((events: KeyboardEvent[]) => console.log(events));
I have tried to merge this with a timer in some way, but I can't figure out how. Not sure how to get there.
You can use the timeInterval operator here, which gives you the time elapsed since the previous emission along with the value. Set it up along these lines:
fromEvent(myInput.nativeElement, 'keydown').pipe(
  tap(e => e.preventDefault()),
  timeInterval(), // just add this operator, will convert to shape {value: T, interval: number}
  scan((acc: KeyboardEvent[], val) => {
    // Add condition here to manually reset the accumulator...
    // also check the interval
    if (acc.length === 3 || val.interval > 3000) {
      acc = [];
    }
    return [...acc, val.value];
  }, []),
  takeUntil(this.destroy$)
).subscribe((events: KeyboardEvent[]) => console.log(events));
here is a working blitz: https://stackblitz.com/edit/rxjs-dxfb37?file=index.ts
It's not an operator I've ever had a use case for before, but it seems to solve your issue here pretty effectively.
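The reset logic inside the scan above can also be isolated as a pure reducer, which makes it easy to reason about without RxJS. A minimal sketch -- the 3-key limit and 3000 ms threshold mirror the snippet above, but the function and variable names are my own:

```javascript
// Pure reducer mirroring the scan() above: resets when the buffer is
// full (3 keys) or when too much time passed since the previous key.
const LIMIT = 3;
const MAX_GAP_MS = 3000;

function accumulateKeys(acc, key, intervalMs) {
  // Same condition as in the scan: full buffer OR slow typist => reset.
  if (acc.length === LIMIT || intervalMs > MAX_GAP_MS) {
    acc = [];
  }
  return [...acc, key];
}

// Simulated key presses, each with the time since the previous press.
let acc = [];
acc = accumulateKeys(acc, 'a', 100);  // ['a']
acc = accumulateKeys(acc, 'b', 200);  // ['a', 'b']
acc = accumulateKeys(acc, 'c', 150);  // ['a', 'b', 'c']
acc = accumulateKeys(acc, 'd', 100);  // buffer was full -> ['d']
acc = accumulateKeys(acc, 'e', 5000); // too slow -> reset -> ['e']
console.log(acc);
```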

With Ecto, validate that a changeset with 2 different related models have the same parent model

In my app I have a method to create a new response. A response has a belongs_to relationship to both a player and match.
In addition player and match both have a belongs_to relationship to a team.
When inserting a new response I want to validate that the player and match referenced by the changeset's player_id and match_id foreign keys belong to the same team.
Currently I'm achieving this as follows. First, define a custom validation that checks the records belonging to the foreign keys:
def validate_match_player(changeset) do
  player_team =
    Player
    |> Repo.get(get_field(changeset, :player_id))
    |> Map.get(:team_id)

  match_team =
    Match
    |> Repo.get(get_field(changeset, :match_id))
    |> Map.get(:team_id)

  cond do
    match_team == player_team -> changeset
    true -> changeset |> add_error(:player, "does not belong to the same team as the match")
  end
end
and then use the validation as part of the changeset:
def changeset(model, params \\ %{}) do
  model
  |> cast(params, [:player_id, :match_id, :message])
  |> validate_required([:player_id, :match_id, :message])
  |> foreign_key_constraint(:match_id)
  |> foreign_key_constraint(:player_id)
  |> validate_match_player()
  |> unique_constraint(
    :player,
    name: :responses_player_id_match_id_unique,
    message: "already has a response for this match"
  )
end
This works fine but involves a couple of extra SQL queries to look up the related records in order to get their team_id foreign keys to compare them.
Is there a nicer way to do this, perhaps using constraints, that avoids the extra queries?
I have two possible improvements:
Application level solution: instead of two queries, you just query once.
Database level solution: you create a trigger for the check in the database.
Application Level Solution
Right now you run two queries to check that the player and match belong to the same team. That means two round trips to the database. You could cut that in half by using a single query, e.g. given the following SQL:
SELECT COUNT(*)
FROM players AS p
INNER JOIN matches AS m
  ON p.team_id = m.team_id
WHERE p.id = $1 AND m.id = $2  -- the changeset's player_id and match_id
you would change your function as follows:
def validate_match_player(changeset) do
  player_id = get_field(changeset, :player_id)
  match_id = get_field(changeset, :match_id)

  [result] =
    Player
    |> join(:inner, [p], m in Match, on: p.team_id == m.team_id)
    |> where([p, m], p.id == ^player_id and m.id == ^match_id)
    |> select([p, m], %{count: count(p.id)})
    |> Repo.all()

  case result do
    %{count: 0} ->
      add_error(changeset, :player, "does not belong to the same team as the match")

    _ ->
      changeset
  end
end
Database Level Solution
I'm assuming you're using PostgreSQL, so my answer will correspond to what you can find in the PostgreSQL manual.
There's no (clean) way to define a constraint on the table that does this: constraints can only access the table where they're defined, and CHECK constraints can only access the columns they're defined on and nothing more.
The best approach would be writing a trigger that validates both fields, e.g.:
CREATE OR REPLACE FUNCTION trigger_validate_match_player()
RETURNS TRIGGER AS $$
BEGIN
  IF (
    SELECT COUNT(*)
    FROM players AS p
    INNER JOIN matches AS m
      ON p.team_id = m.team_id
    WHERE p.id = NEW.player_id AND m.id = NEW.match_id
  ) = 0
  THEN
    RAISE 'does not belong to the same team as the match'
      USING ERRCODE = 'check_violation';
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER responses_validate_match_player
BEFORE INSERT OR UPDATE ON responses
FOR EACH ROW
EXECUTE PROCEDURE trigger_validate_match_player();
The previous trigger will raise an exception when it fails. This also means Ecto will raise an exception. You can see how to handle this exception here.
In the end, maintaining triggers is not easy unless you're using something like sqitch for database migrations.
PS: If you're curious, the very dirty way of doing this in a CHECK constraint is by defining a PostgreSQL function that basically bypasses the limitation. I wouldn't recommend it.
I hope this helps :)

Optimizing neo4j query with multiple merges

What I need to do
For each row I need to find the user by device_id, or create a new node if one doesn't exist. Then, for each row, I need to update or create a few edges if the row contains certain properties. The load is massive (about 20k rows per second) and Neo4j slows down. Each batch is exactly 20k rows. Here is my query:
UNWIND {batch} AS row
MERGE (m:User {device_id: row.device_id})
FOREACH (ignore IN CASE WHEN row.type IS NOT NULL THEN [1] ELSE [] END |
  MERGE (e:Event {type: row.type})
  MERGE (m)-[r:REL]->(e)
  // coalesce instead of CASE r.count WHEN NULL, which never matches (null = null is null in Cypher)
  SET r.count = coalesce(r.count, 0) + 1
)
FOREACH (ignore IN CASE WHEN row.country IS NOT NULL THEN [1] ELSE [] END |
  MERGE (c:Country {id: row.country})
  MERGE (m)-[:Belongs]->(c)
)
WITH m, coalesce(row.user_id, m.user_id) AS user_id
SET m.user_id = user_id
I solved this issue by decreasing the batch size to 5k and running a few of those inside a transaction in parallel before committing.

Parametric LINQ query

This is another take on accessing dynamic objects in F#. There I'm using let y = x.Where(fun x -> x.City = "London").Select("new(City,Zip)") to parametrize the query and extract the necessary items. These would correspond to columns in an SQL query, and are represented by properties of the data context. This is the part that I would like to pass in as a parameter.
type Northwind = ODataService<"http://services.odata.org/Northwind/Northwind.svc">
let db = Northwind.GetDataContext()
let query2 =
    query { for customer in db.Customers do
            select customer } |> Seq.toArray

let qryfun (x: Northwind.ServiceTypes.Customer) =
    query { for x in query2 do
            select (x.City, x.CompanyName, x.Country) }
Basically I would like to pass in not only x but also x.*. As I'm accessing one fixed database, I can factor out x. However, I now have 40 small functions extracting the different columns. Is it possible to factor this out into one function and pass the property as an argument? So sometimes I extract x.City and other times x.Country. I have tried using quotations but cannot splice them properly, and maybe that is not the right approach.
Regarding quotation splicing, this works for me:
open System.Linq
type record = { x:int; y:string }
let mkQuery q =
    query {
        for x in [{ x = 1; y = "test" }].AsQueryable() do
        select ((%q) x)
    }

mkQuery <# fun r -> r.x, r.y #>
|> Seq.iter (printfn "%A")

EBay OData Type Provider in F# and getting no results with LINQ

Can anyone help me understand why the piece of code below returns no results from the query, yet the second sample does (though woe betide me if I try to use criteria on the second one!)?
type EbayData =
ODataService<"http://ebayodata.cloudapp.net">
let Ebay = EbayData.GetDataContext()
let Favourites title number =
    query {
        for deal in Ebay.Deals do
        where (deal.Title.Contains(title))
        take number
    }
let Esearch title number =
    [ for item in Favourites title number do
        yield item ]
The working version:
type Catalog = ODataService< "http://ebayodata.cloudapp.net/" >
let ebay = Catalog.GetDataContext()
let trial =
    [ for item in ebay.Deals do
        yield item ]
I can't seem to output the first to any kind of list, no matter what I do with |> etc. The second sample doesn't seem to bring back many results to run a text query on. However, my real issue is that I can't get anything out of the LINQ-in-F# version.
The output is used in a WPF application where I use VB to talk to the list. I have populated a non discriminated list of 10 items with it, so that end does work. This is the VB code.
For Each Deal In trial.Where(Function(p) p.Title.Contains(title.Text))
DealResults.Items.Add(buildStackPanel(Deal))
Next
The spacing for the F# code in this post doesn't seem to work when I hit Ctrl+K, so if anyone can tell me what I'm doing wrong -- I guess that's a second question!
I don't know why this is not working for you. I knocked out the following and it seems to work:
open Microsoft.FSharp.Data
type Catalog = TypeProviders.ODataService< "http://ebayodata.cloudapp.net/" >
let ebay = Catalog.GetDataContext()
let trial =
    [ for item in ebay.Deals do
        yield item ]

let trial2 = query {
    for deal in ebay.Deals do
    where (deal.Title.Contains "a")
    take 2
}

let ESearch title number =
    query {
        for deal in ebay.Deals do
        where (deal.Title.Contains title)
        take number
    }

[<EntryPoint>]
let main argv =
    trial |> Seq.take 2 |> Seq.iter (fun d -> printfn "%s" d.Title)
    trial2 |> Seq.iter (fun d -> printfn "%s" d.Title)
    ESearch "a" 2 |> Seq.iter (fun d -> printfn "%s" d.Title)
    0
Maybe you tried searching for stuff that doesn't exist? At the moment there are only 6 deals, so this is not unlikely.
Querying Items
Read about the eBay OData service here: http://ebayodata.cloudapp.net/docs
It has special needs when querying for Items:
(search parameter or $filter with Seller, PrimaryCategoryId or
SecondaryCategoryId is required)
So to query Items, you need to provide at least a search phrase. Your where clause doesn't get translated into a search parameter in the final URL. To add custom query parameters with this type provider, use .AddQueryOption.
let ItemSearch title number =
    query {
        for item in ebay.Items.AddQueryOption("search", title) do
        take number
    }
// use
ItemSearch "wario" 2 |> Seq.iter (fun d -> printfn "%s" d.Title)