clickhouse sum arrays at same index [duplicate]

I am trying to add an array column element by element after grouping by another column.
Given the table A below:
id units
1 [1,1,1]
2 [3,0,0]
1 [5,3,7]
3 [2,5,2]
2 [3,2,6]
I would like to query something like:
select id, sum(units) from A group by id
And get the following result:
id units
1 [6,4,8]
2 [6,2,6]
3 [2,5,2]
Where the units arrays in rows with the same id get added element by element.

Try this query:
SELECT id, sumForEach(units) AS units
FROM
(
    /* emulate dataset */
    SELECT data.1 AS id, data.2 AS units
    FROM
    (
        SELECT arrayJoin([(1, [1,1,1]), (2, [3,0,0]), (1, [5,3,7]), (3, [2,5,2]), (2, [3,2,6])]) AS data
    )
)
GROUP BY id
/* Result
┌─id─┬─units───┐
│ 1 │ [6,4,8] │
│ 2 │ [6,2,6] │
│ 3 │ [2,5,2] │
└────┴─────────┘
*/
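For a real table instead of the emulated dataset, the same aggregation works directly: sumForEach is the sum aggregate with the -ForEach combinator, which applies the aggregate at each array index. A minimal sketch, assuming table A is shaped like in the question (column types and engine are assumptions):

-- hypothetical DDL for the table A described in the question
CREATE TABLE A
(
    id    UInt32,
    units Array(UInt32)
)
ENGINE = MergeTree
ORDER BY id;

INSERT INTO A VALUES (1, [1,1,1]), (2, [3,0,0]), (1, [5,3,7]), (3, [2,5,2]), (2, [3,2,6]);

-- element-wise sum of the arrays within each id group
SELECT id, sumForEach(units) AS units
FROM A
GROUP BY id
ORDER BY id;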

Related

How to use array join in ClickHouse

I'm trying to split 2 arrays using arrayJoin()
My table:
create table test_array
(
    col1 Array(INT),
    col2 Array(INT),
    col3 String
)
engine = TinyLog;
Then I insert these values:
insert into test_array values ([1,2],[11,22],'Text');
insert into test_array values ([5,6],[55,66],'Text');
When I split only the first array (col1), each element of col1 is expanded into its own row.
But what I need is to split col1 and col2 together and return both in the select.
I tried this query, but it didn't work:
select arrayJoin(col1), arrayJoin(col2), col1, col2, col3 from test_array;
How can I edit the query to avoid the unwanted extra (cross-product) rows?
Thanks.
Serial calls of arrayJoin produce a Cartesian product (a small sketch of that behaviour follows the result below); to avoid it, use ARRAY JOIN:
SELECT
c1,
c2,
col1,
col2,
col3
FROM test_array
ARRAY JOIN
col1 AS c1,
col2 AS c2
/*
┌─c1─┬─c2─┬─col1──┬─col2────┬─col3─┐
│ 1 │ 11 │ [1,2] │ [11,22] │ Text │
│ 2 │ 22 │ [1,2] │ [11,22] │ Text │
│ 5 │ 55 │ [5,6] │ [55,66] │ Text │
│ 6 │ 66 │ [5,6] │ [55,66] │ Text │
└────┴────┴───────┴─────────┴──────┘
*/
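For reference, a tiny sketch of the Cartesian behaviour the original query runs into: each arrayJoin call expands its array independently, so two calls over 2-element arrays produce 2 × 2 = 4 combinations per source row.

-- two independent arrayJoin calls multiply into a cross product
SELECT arrayJoin([1, 2]) AS c1, arrayJoin([11, 22]) AS c2
/* 4 rows: (1,11), (1,22), (2,11), (2,22) */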
One more way is arrayZip() + untuple():
SELECT
untuple(arrayJoin(arrayZip(col1, col2))),
col3
FROM test_array
┌─_ut_1─┬─_ut_2─┬─col3─┐
│ 1 │ 11 │ Text │
│ 2 │ 22 │ Text │
│ 5 │ 55 │ Text │
│ 6 │ 66 │ Text │
└───────┴───────┴──────┘
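If named columns are preferred over the generated _ut_1/_ut_2 names, a sketch of the same idea, assuming ARRAY JOIN over an arrayZip expression (the alias on the expression is required):

SELECT
    t.1 AS c1,  -- first element of the zipped tuple (from col1)
    t.2 AS c2,  -- second element of the zipped tuple (from col2)
    col3
FROM test_array
ARRAY JOIN arrayZip(col1, col2) AS t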

Group by date with sparkline-like data in one query

I have time-series data from similar hosts stored in a ClickHouse table with the following structure:
event_type | event_day
------------|---------------------
type_1 | 2017-11-09 20:11:28
type_1 | 2017-11-09 20:11:25
type_2 | 2017-11-09 20:11:23
type_2 | 2017-11-09 20:11:21
Each row in the table represents one occurrence of event_type at that datetime. To quickly assess the situation I need to show the overall sum (total) plus the last seven daily values (pulse), like this:
event_type | day | total | pulse
------------|------------|-------|-----------------------------
type_1 | 2017-11-09 | 876 | 12,9,23,67,5,34,10
type_2 | 2017-11-09 | 11865 | 267,120,234,425,102,230,150
I tried to get it with one query in the following way, but it failed: the pulse consists of the same value repeated:
with
arrayMap(x -> today() - 7 + x, range(7)) as week_range,
arrayMap(x -> count(event_type), week_range) as pulse
select
event_type,
toDate(event_date) as day,
count() as total,
pulse
from database.table
group by day, event_type
event_type | day | total | pulse
------------|------------|-------|-------------------------------------------
type_1 | 2017-11-09 | 876 | 876,876,876,876,876,876,876
type_2 | 2017-11-09 | 11865 | 11865,11865,11865,11865,11865,11865,11865
Please point out where my mistake is and how to get the desired result.
select
    event_type,
    groupArray(1)(day)[1],
    arraySum(pulse) total7,
    groupArray(7)(cnt) pulse
from
(
    select
        event_type,
        toDate(event_date) as day,
        count() as cnt
    from database.table
    where day >= today() - 30
    group by event_type, day
    order by event_type, day desc
)
group by event_type
order by event_type
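Note that because the subquery orders by day desc, groupArray(7)(cnt) collects the latest seven days newest-first; if a chronological pulse is wanted, arrayReverse can be applied on top. A toy sketch on synthetic numbers (hypothetical data, not the real table):

SELECT
    id,
    arrayReverse(groupArray(7)(cnt)) AS pulse
FROM
(
    -- fake one id with ten daily counts 0,10,...,90, latest day first
    SELECT 1 AS id, number AS day, number * 10 AS cnt
    FROM numbers(10)
    ORDER BY day DESC
)
GROUP BY id
/* expected: pulse = [30,40,50,60,70,80,90] */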
I would also consider calculating the pulse on the application side, with CH just providing the required per-day data.
The neighbor window function can also be used:
SELECT
number,
[neighbor(number, -7), neighbor(number, -6), neighbor(number, -5), neighbor(number, -4), neighbor(number, -3), neighbor(number, -2), neighbor(number, -1)] AS pulse
FROM
(
SELECT number
FROM numbers(10, 15)
ORDER BY number ASC
)
┌─number─┬─pulse──────────────────┐
│ 10 │ [0,0,0,0,0,0,0] │
│ 11 │ [0,0,0,0,0,0,10] │
│ 12 │ [0,0,0,0,0,10,11] │
│ 13 │ [0,0,0,0,10,11,12] │
│ 14 │ [0,0,0,10,11,12,13] │
│ 15 │ [0,0,10,11,12,13,14] │
│ 16 │ [0,10,11,12,13,14,15] │
│ 17 │ [10,11,12,13,14,15,16] │
│ 18 │ [11,12,13,14,15,16,17] │
│ 19 │ [12,13,14,15,16,17,18] │
│ 20 │ [13,14,15,16,17,18,19] │
│ 21 │ [14,15,16,17,18,19,20] │
│ 22 │ [15,16,17,18,19,20,21] │
│ 23 │ [16,17,18,19,20,21,22] │
│ 24 │ [17,18,19,20,21,22,23] │
└────────┴────────────────────────┘
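On newer ClickHouse versions that support window functions, the same sliding collection can be expressed with a frame, which (unlike neighbor) can also be partitioned, e.g. per event_type; a hedged sketch on the same numbers:

SELECT
    number,
    -- previous up-to-7 values; short arrays instead of zero-padding at the start
    groupArray(number) OVER (ORDER BY number ROWS BETWEEN 7 PRECEDING AND 1 PRECEDING) AS pulse
FROM numbers(10, 15)
ORDER BY number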

How can I calculate each user's points per day as the sum of all points from the beginning up to that day in ClickHouse

I have this data in ClickHouse (the sample rows are emulated in the query below).
The final points of each user on a given day are the sum of all points from the beginning up to that day.
E.g., the points of user 1 on 2020-07-02 are 800 and on 2020-07-03 are 200.
I need this result: the points of each user per day.
select uid, d, t
from
(
    select
        uid,
        groupArray(date) dg,
        arrayCumSum(groupArray(spt)) gt
    from
    (
        select uid, date, sum(pt) spt
        from
        (
            select 1 tid, '2020-07-01' date, 1 uid, 500 pt
            union all
            select 1 tid, '2020-07-02' date, 1 uid, 300 pt
            union all
            select 1 tid, '2020-07-03' date, 1 uid, -600 pt
        )
        group by uid, date
        order by uid, date
    )
    group by uid
)
array join dg as d, gt as t
┌─uid─┬─d──────────┬───t─┐
│ 1 │ 2020-07-01 │ 500 │
│ 1 │ 2020-07-02 │ 800 │
│ 1 │ 2020-07-03 │ 200 │
└─────┴────────────┴─────┘
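On newer ClickHouse versions with window-function support, the same running total can be written without groupArray/arrayCumSum/ARRAY JOIN; a hedged sketch over the same emulated rows:

select
    uid,
    date as d,
    -- running sum of daily points per user
    sum(spt) over (partition by uid order by date) as t
from
(
    select uid, date, sum(pt) spt
    from
    (
        select 1 tid, '2020-07-01' date, 1 uid, 500 pt
        union all
        select 1 tid, '2020-07-02' date, 1 uid, 300 pt
        union all
        select 1 tid, '2020-07-03' date, 1 uid, -600 pt
    )
    group by uid, date
)
order by uid, d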

How to pivot subgroups?

We have created a flat table for ClickHouse and are trying to get records from this table to create a materialized view. The logic is: if e_id is null, the record is 'TypeB'; if e_id is not null, the record is 'TypeA'. Both TypeA and TypeB records will have the same p_id and s_id. We want to create one record per p_id + s_id combination.
The query given below works well with the filter (p_id = 1 and s_id = 1), but without the filters it throws the exception "DB::Exception: Scalar subquery returned more than one row".
Is it possible to do this in ClickHouse?
Would it be possible to create Materialized View with such a query?
select
    p_id,
    s_id,
    groupArray(e_id),
    groupArray(name),
    (
        select groupArray(name)
        from flat_table
        where e_id is null and p_id = 1 and s_id = 1
        group by p_id, s_id
    ) as typeB
from flat_table
where e_id is not null and p_id = 1 and s_id = 1
group by p_id, s_id;
/*
This is what the table looks like:
Flat_table
p_id s_id e_id name
1 1 1 Jake
1 1 2 Bob
1 1 null Barby
1 1 null Ella
This is expected result:
p_id s_id e_id typeA typeB
1 1 [1,2] [Jake,Bob] [Barby,Ella]
*/
Let's try this query:
SELECT p_id, s_id, e_ids, typeA, typeB
FROM
(
    SELECT
        p_id,
        s_id,
        groupArray((e_id, name)) AS eid_names,
        arrayMap(x -> x.1, arrayFilter(x -> not isNull(x.1), eid_names)) AS e_ids,
        arrayMap(x -> x.2, arrayFilter(x -> not isNull(x.1), eid_names)) AS typeA,
        arrayMap(x -> x.2, arrayFilter(x -> isNull(x.1), eid_names)) AS typeB
    FROM test.test_006
    GROUP BY p_id, s_id
)
/* Result
┌─p_id─┬─s_id─┬─e_ids─┬─typeA────────────┬─typeB──────────────┐
│ 2 │ 2 │ [1,2] │ ['Jake2','Bob2'] │ ['Barby2','Ella2'] │
│ 1 │ 1 │ [1,2] │ ['Jake','Bob'] │ ['Barby','Ella'] │
└──────┴──────┴───────┴──────────────────┴────────────────────┘
*/
/* Data preparation queries */
CREATE TABLE test.test_006
(
    `p_id` Int32,
    `s_id` Int32,
    `e_id` Nullable(Int32),
    `name` String
)
ENGINE = Memory;

INSERT INTO test.test_006
VALUES (1, 1, 1, 'Jake'), (1, 1, 2, 'Bob'), (1, 1, null, 'Barby'), (1, 1, null, 'Ella'),
       (2, 2, 1, 'Jake2'), (2, 2, 2, 'Bob2'), (2, 2, null, 'Barby2'), (2, 2, null, 'Ella2');
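An alternative sketch using the -If aggregate combinator instead of filtering tuple arrays (same test.test_006 table as above); it skips the intermediate eid_names array:

SELECT
    p_id,
    s_id,
    groupArrayIf(e_id, isNotNull(e_id)) AS e_ids,
    groupArrayIf(name, isNotNull(e_id)) AS typeA,
    groupArrayIf(name, isNull(e_id)) AS typeB
FROM test.test_006
GROUP BY p_id, s_id

As for the materialized-view part of the question: a materialized view only sees the rows of each inserted block, so a GROUP BY like this aggregates per insert; keeping the result correct across inserts typically requires an AggregatingMergeTree target with -State/-Merge combinators.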
