Relational Algebra: Select tuples based on whether an attribute is unique in a table

Given a table:
T = {A1, A2, A3, A4}
How do you write a relational algebra statement that picks all tuples that have the same value for A3 as another tuple in the table?

You do an equijoin of T with itself on column A3. First rename T as T2, then join:
T2 ← T;  T ⋈_{T.A3 = T2.A3} T2
Now every tuple from T is paired with all tuples that have the same value for A3. You can further select for a specific value of A3 from T and project onto the attributes of T2.
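The self-join idea can be sketched in Python (this is an illustration, not from the original answer; the "id" key attribute is an assumption, used to exclude a tuple matching itself):

```python
# Hypothetical relation T with an assumed key attribute "id".
T = [
    {"id": 1, "A3": "x"},
    {"id": 2, "A3": "y"},
    {"id": 3, "A3": "x"},
]

# Equijoin of T with a renamed copy of itself on A3; keeping only pairs
# where the key differs leaves exactly the tuples whose A3 value also
# appears in some other tuple.
dupes = {t["id"] for t in T for t2 in T
         if t["A3"] == t2["A3"] and t["id"] != t2["id"]}
result = [t for t in T if t["id"] in dupes]
```

Here tuples 1 and 3 share the A3 value "x", so they survive; tuple 2's value is unique and is dropped.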

Related

Why does maxMerge() give different results than max() of sumMerge()?

The whole point is to get peaks per period (e.g. 5-minute peaks) for a value that accumulates. So it needs to be summed per period, and then the peak (maximum) can be found among those sums: (select max(v) from (select sum(v) from t group by t1, a2)).
I have a base table t.
Data are inserted into t; consider two attributes (a time t1 and some string a2) and one numeric value v.
The value accumulates, so it needs to be summed to get the total volume over a certain period. Example of inserted rows:
t1 | a2 | v
----------------
date1 | b | 1
date2 | c | 20
I'm using an MV to compute sumState(), and from that I get peaks using sumMerge() and then max().
I need it only for max values, so I was wondering if I could use maxState() directly.
So this is what I do now: I use an MV that computes a 5-minute sum, and from that I read max():
CREATE TABLE IF NOT EXISTS sums_table ON CLUSTER '{cluster}' (
    t1 DateTime,
    a2 String,
    v AggregateFunction(sum, UInt32)
)
ENGINE = ReplicatedAggregatingMergeTree(
    '...',
    '{replica}'
)
PARTITION BY toDate(t1)
ORDER BY (a2, t1)
PRIMARY KEY (a2);

CREATE MATERIALIZED VIEW IF NOT EXISTS mv_a
ON CLUSTER '{cluster}'
TO sums_table
AS
SELECT
    toStartOfFiveMinute(t1) AS t1,
    a2,
    sumState(toUInt32(v)) AS v
FROM t
GROUP BY t1, a2;
From that I'm able to read the max of the 5-minute sums per a2 using:
SELECT
    a2,
    max(sum) AS max
FROM (
    SELECT
        t1,
        a2,
        sumMerge(v) AS sum
    FROM sums_table
    WHERE t1 BETWEEN :fromDateTime AND :toDateTime
    GROUP BY t1, a2
)
GROUP BY a2
ORDER BY max DESC
That works perfectly.
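The working two-stage aggregation (sum per 5-minute bucket, then max of those sums per key) can be sketched in plain Python; the bucket labels and values here are made up for illustration:

```python
from collections import defaultdict

# Illustrative rows: (five_minute_bucket, a2, v). Data are assumptions.
rows = [
    ("10:00", "b", 1), ("10:00", "b", 2),   # bucket sum for "b" at 10:00 is 3
    ("10:05", "b", 10),                     # bucket sum for "b" at 10:05 is 10
    ("10:00", "c", 20),
]

# Stage 1: sum v per (bucket, a2), like sumMerge(v) ... GROUP BY t1, a2.
sums = defaultdict(int)
for t1, a2, v in rows:
    sums[(t1, a2)] += v

# Stage 2: max of those sums per a2, like the outer max(sum) GROUP BY a2.
peaks = defaultdict(int)
for (t1, a2), s in sums.items():
    peaks[a2] = max(peaks[a2], s)
```

The peak for "b" is 10 (the larger of the two bucket sums 3 and 10), not the max of any single row.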
So I wanted to achieve the same using maxState() and maxMerge():
CREATE TABLE IF NOT EXISTS max_table ON CLUSTER '{cluster}' (
    t1 DateTime,
    a2 String,
    max_v AggregateFunction(max, UInt32)
)
ENGINE = ReplicatedAggregatingMergeTree(
    '...',
    '{replica}'
)
PARTITION BY toDate(t1)
ORDER BY (a2, t1)
PRIMARY KEY (a2);
CREATE MATERIALIZED VIEW IF NOT EXISTS mv_b
ON CLUSTER '{cluster}'
TO max_table
AS
SELECT
    t1,
    a2,
    maxState(v) AS max_v
FROM (
    SELECT
        toStartOfFiveMinute(t1) AS t1,
        a2,
        toUInt32(sum(v)) AS v
    FROM t
    GROUP BY t1, a2
)
GROUP BY t1, a2;
I thought that if I get a max per time (t1) and a2, and then select the max of that per a2, I'd get the maximum value for each a2. But this query gives totally different max values compared to the max-of-sums query above:
SELECT
    a2,
    max(max) AS max
FROM (
    SELECT
        t1,
        a2,
        maxMerge(max_v) AS max
    FROM max_table
    WHERE t1 BETWEEN :fromDateTime AND :toDateTime
    GROUP BY t1, a2
) maxs_per_time_and_a2
GROUP BY a2
What did I do wrong? Do I misunderstand MVs? Is it possible to use maxState() with maxMerge() over two or more attributes to compute the max over a longer period, say a year?
SELECT
    t1,
    a2,
    maxState(v) AS max_v
FROM (
    SELECT
        toStartOfFiveMinute(t1) AS t1,
        a2,
        toUInt32(sum(v)) AS v
    FROM t
    GROUP BY t1, a2
)
GROUP BY t1, a2
This is incorrect, and in fact impossible.
A materialized view is an insert trigger: it never reads the real table t. It only sees the rows of the block currently being inserted.
So you are getting the max over the sums of the rows within a single insert buffer.
If you insert one row with v=10, you will get max_v = 10. The materialized view does not "know" that a previous insert already added rows to the same bucket, so their sum is not taken into account.
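The pitfall can be simulated in Python: sum inside the MV only aggregates within one insert block, so the stored "max" is a max over partial per-insert sums, not over the true bucket totals. The data below are made up for illustration:

```python
from collections import defaultdict

# Two separate inserts land in the same 5-minute bucket for key "b".
insert_blocks = [
    [("10:00", "b", 6)],   # first insert
    [("10:00", "b", 7)],   # second insert
]

# What the MV with an inner sum() actually computes: the sum *within each
# insert block*, then a running max over those partial sums.
mv_max = defaultdict(int)
for block in insert_blocks:
    block_sums = defaultdict(int)
    for t1, a2, v in block:
        block_sums[(t1, a2)] += v
    for key, s in block_sums.items():
        mv_max[key] = max(mv_max[key], s)

# What was intended: sum over *all* rows in the bucket, then the max.
true_sums = defaultdict(int)
for block in insert_blocks:
    for t1, a2, v in block:
        true_sums[(t1, a2)] += v
```

The MV-style result is 7 (the larger single-insert sum), while the intended bucket total is 13; hence the "totally different max values".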

SQL UNION Optimization

I have 4 tables named A1, A2, B1, B2.
To fulfill a requirement, I have two ways to write SQL queries. The first one is:
(A1 UNION ALL A2) A JOIN (B1 UNION ALL B2) B ON A.id = B.a_id WHERE ...
And the second one is:
(A1 JOIN B1 on A1.id = B1.a_id WHERE ...) UNION ALL (A2 JOIN B2 on A2.id = B2.a_id WHERE ... )
I tried both approaches and realized they both give the same execution time and query plans in some specific cases. But I'm unsure whether they will always give the same performance or not.
So my question is: when is the first or the second one better in terms of performance?
In terms of coding, I prefer the first one because I can create two views on (A1 UNION ALL A2) as well as (B1 UNION ALL B2) and treat them like two tables.
The second one is better:
(A1 JOIN B1 on A1.id = B1.a_id WHERE ...) UNION ALL (A2 JOIN B2 on A2.id = B2.a_id WHERE ... )
It gives the Oracle CBO optimizer more information about how your tables are related to each other, so the CBO can estimate the candidate plans' costs more precisely. It's all about cardinality, column statistics, etc.
Purely functionally, and without knowing what's in the tables, the first seems better: if a row in A1 matches a row in B2, your second query won't join them.
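The functional difference between the two forms can be shown with a tiny Python sketch (the rows are invented so that the only matches are cross-table, A1 with B2 and vice versa):

```python
# Toy rows: A-tables hold (id,), B-tables hold (a_id,). Data are assumptions.
A1 = [(1,)]
A2 = [(2,)]
B1 = [(2,)]   # matches a row in A2
B2 = [(1,)]   # matches a row in A1

def join(As, Bs):
    """Inner join on A.id = B.a_id."""
    return [(a, b) for a in As for b in Bs if a[0] == b[0]]

# First form: join of the unions - finds the cross-table matches.
first = join(A1 + A2, B1 + B2)

# Second form: union of the per-pair joins - misses A1xB2 and A2xB1 matches.
second = join(A1, B1) + join(A2, B2)
```

The first form returns two pairs, the second returns none, so they are only equivalent when matches can never cross the A1/B2 and A2/B1 boundaries.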

Relational Algebra: Natural Join having the same result as Cartesian product

I am trying to understand what will be the result of performing a natural join
between two relations R and S, where they have no common attributes.
Following the definition below, I thought the answer might be the empty set:
Natural Join definition.
My line of thought was that because the condition in the 'select' symbol is not met, the projection of all of the attributes won't take place.
When I asked my lecturer about this, he said the output will be the same as taking the Cartesian product of R and S.
I can't seem to understand why; I would appreciate any help.
Natural join combines a cross product and a selection into one
operation. It performs a selection forcing equality on those
attributes that appear in both relation schemes. Duplicates are
removed as in all relation operations.
There are two special cases:
• If the two relations have no attributes in common, then their
natural join is simply their cross product.
• If the two relations have more than one attribute in common,
then the natural join selects only the rows where all pairs of
matching attributes match.
Notation: r ⋈ s
Let r and s be relation instances on schema R and S
respectively.
The result is a relation on schema R ∪ S, obtained by
considering each pair of tuples tr from r and ts from s.
If tr and ts have the same value on each of the attributes in R ∩ S, a
tuple t is added to the result, where
– t has the same value as tr on R
– t has the same value as ts on S
Example:
R = (A, B, C, D)
S = (E, B, D)
Result schema = (A, B, C, D, E)
r ⋈ s is defined as:
π_{r.A, r.B, r.C, r.D, s.E} (σ_{r.B = s.B ∧ r.D = s.D} (r × s))
The definition of the natural join you linked is:
It can be broken as:
1. First take the Cartesian product.
2. Then select only those rows in which attributes of the same name have the same value.
3. Finally apply a projection so that all attributes have distinct names.
If the two tables have no attributes with the same name, the selection in step 2 has no condition to apply and keeps every row, so the result is indeed the Cartesian product.
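The three steps can be sketched as a small Python function (an illustration, not a textbook implementation; relations are lists of dicts keyed by attribute name):

```python
def natural_join(r, s):
    """Natural join of two relations given as non-empty lists of dicts."""
    common = set(r[0]) & set(s[0])          # attributes shared by both schemas
    # Keep each pair of tuples that agrees on every common attribute,
    # merging them into one output tuple (the projection step is implicit
    # because dict keys are unique).
    return [{**tr, **ts} for tr in r for ts in s
            if all(tr[a] == ts[a] for a in common)]

# No common attributes: the agreement condition is vacuously true for
# every pair, so the result is the Cartesian product.
R = [{"A": 1}, {"A": 2}]
S = [{"E": "x"}, {"E": "y"}]
```

With |R| = |S| = 2 and no shared attributes, natural_join(R, S) has 2 × 2 = 4 tuples, exactly the Cartesian product.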

LINQ query help - many-to-many related

In my database, I have a user table and a workgroup table, and a many-to-many relationship. A user can belong to one or more workgroups. I am using entity framework for my ORM (EF 4.1 Code First).
User Table has users:
1,2,3,4,5,6,7,8,9,10
Workgroup table has workgroups:
A,B,C, D
WorkgroupUser table has entries
A1, A2, A3, A4, A5
B1, B3, B5, B7, B9
C2, C4, C6, C8, C10
D1, D2, D3, D9, D10
What I would like to do is:
Given user 4, which belongs to workgroups A and C,
it shares those workgroups with users
1,2,3,4,5 (from workgroup A) and
2,4,6,8,10 (from workgroup C),
so the distinct set of users in common is 1,2,3,4,5,6,8,10.
How do I write a LINQ statement (preferably in fluent API) for this?
Thank you,
Here's the general idea (since I don't know the properties of the User and WorkGroup entities):
var result = context.users
    .Where(u => u.ID == 4)
    .SelectMany(u => u.WorkGroups.SelectMany(wg => wg.Users))
    .Distinct();

Algorithms to create a tabular representation of a DAG?

Given a DAG, in which each node belongs to a category, how can this graph be transformed into a table with a column for each category? The transformation doesn't have to be reversible, but should preserve useful information about the structure of the graph; and should be a 'natural' transformation, in the sense that a person looking at the graph and the table should not be surprised by any of the rows. It should also be compact, i.e. have few rows.
For example given a graph of nodes a1,b1,b2,c1 with edges a1->b1, a1->b2, b1->c1, b2->c1 (i.e. a diamond-shaped graph) I would expect to see the following table:
a b c
--------
a1 b1 c1
a1 b2 c1
I've thought about this problem quite a bit, but I'm having trouble coming up with an algorithm that gives intuitive results on certain graphs. Consider the graph a1,b1,c1 with edges a1->c1, b1->c1. I'd like the algorithm to produce this table:
a b c
--------
a1 b1 c1
But maybe it should produce this instead:
a b c
--------
a1 c1
a1 b1
I'm looking for creative ideas and insights into the problem. Feel free to vary to simplify or constrain the problem if you think it will help.
Brainstorm away!
Edit:
The transformation should always produce the same set of rows, although the order of rows does not matter.
The table should behave nicely when sorting and filtering using, e.g., Excel. This means that multiple nodes cannot be packed into a single cell of the table - only one node per cell.
What you need is a variation of topological sorting. This is an algorithm that "sorts" graph vertices as if an edge a---->b meant a > b. Since the graph is a DAG, there are no cycles in it, and this > relation is transitive, so at least one sorting order exists.
For your diamond-shaped graph two topological orders exist:
a1 b1 b2 c1
a1 b2 b1 c1
b1 and b2 items are not connected, even indirectly, therefore, they may be placed in any order.
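A topological order for the diamond graph can be obtained with Python's standard-library graphlib (the node names come from the question; which of b1/b2 comes first is implementation-defined, as noted above):

```python
from graphlib import TopologicalSorter

# The diamond-shaped graph from the question, as node -> set of predecessors.
graph = {"b1": {"a1"}, "b2": {"a1"}, "c1": {"b1", "b2"}}

# static_order() yields the nodes in a valid topological order,
# e.g. a1, b1, b2, c1 or a1, b2, b1, c1.
order = list(TopologicalSorter(graph).static_order())
```

Any output respects the edges: a1 before b1 and b2, both before c1.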
After you have sorted the graph, you know an approximate order. My proposal is to fill the table in a straightforward way (one vertex per row) and then "compact" the table. Perform the sort and take the sequence you get as output. Fill the table from top to bottom, assigning each vertex to its column:
a b c
--------
a1
b2
b1
c1
Now compact the table by walking from top to bottom (and then make similar pass from bottom to top). On each iteration, you take a closer look to a "current" row (marked as =>) and to the "next" row.
If the nodes in a column differ between the current row and the next row, do nothing for this column:
from ----> to
X b c X b c
-------- --------
=> X1 . . X1 . .
X2 . . => X2 . .
If in column X the next row has no vertex (the table cell is empty) and the current row has vertex X1, then you should sometimes fill this empty cell with the vertex from the current row. But not always: you want your table to be logical, don't you? So copy the vertex if and only if there is no edge b--->X1, c--->X1, etc., for any vertex in the current row.
from ---> to
X b c X b c
-------- --------
=> X1 b c X1 b c
b1 c1 => X1 b1 c1
(Edit:) After first (forward) and second (backward) passes, you'll have such tables:
first second
a b c a b c
-------- --------
a1 a1 b2 c1
a1 b2 a1 b2 c1
a1 b1 a1 b1 c1
a1 b1 c1 a1 b1 c1
Then, just remove equal rows and you're done:
a b c
--------
a1 b2 c1
a1 b1 c1
And you should get a nice table. O(n^2).
How about compacting all the nodes reachable from one node together in one cell? For example, your first DAG would look like:
a b c
---------------
a1 [b1,b2]
b1 c1
b2 c1
It sounds like a train system map with stations within zones (a,b,c).
You could be generating a table of all possible routes in one direction. In that case the row "a1, b1, c1" would seem to imply a1->b1, so don't format it like that if you only have a1->c1 and b1->c1.
You could decide to produce a table by listing the longest routes starting in zone a,
using each edge only once, ending with the short leftover routes. Or allow edges to be reused only if they connect unused edges or extend a route.
In other words, do a depth first search, trying not to reuse edges (reject any path that doesn't include unused edges, and optionally trim used edges at the endpoints).
Here's what I ended up doing:
Find all paths emanating from a node without in-edges. (Could be expensive for some graphs, but works for mine)
Traverse each path to collect a row of values
Compact the rows
Compacting the rows is done as follows:
For each pair of columns x, y:
Construct a map from every value of x to its possible values of y.
Create another map, for entries that have only one distinct value of y, mapping the value of x to its single value of y.
Fill in the blanks using these maps. When filling in a value, check for related blanks that can be filled.
This gives a very compact output and seems to meet all my requirements.
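The first two steps (find all paths from nodes without in-edges, collect a row per path) can be sketched in Python; the compaction step is omitted, and taking the node's leading letter as its category is an assumption based on the question's naming:

```python
# Diamond graph from the question, as node -> list of successors.
edges = {"a1": ["b1", "b2"], "b1": ["c1"], "b2": ["c1"], "c1": []}

def rows_from_paths(edges):
    """One table row per root-to-leaf path; column = category letter."""
    targets = {v for vs in edges.values() for v in vs}
    roots = [n for n in edges if n not in targets]   # nodes without in-edges
    rows = []

    def walk(node, path):
        path = path + [node]
        if not edges[node]:                          # leaf: emit one row
            rows.append({n[0]: n for n in path})
        for nxt in edges[node]:
            walk(nxt, path)

    for r in roots:
        walk(r, [])
    return rows
```

For the diamond this yields exactly the two expected rows, a1/b1/c1 and a1/b2/c1; as noted above, enumerating all paths can be expensive for some graphs.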
