Complex Networks in Hive - Optimization Code - hadoop

I have a problem getting my Hive code optimized.
I have a huge table as follows:
Customer_id  Product_id  Date   Value
1            1           02/28  100.0
1            2           02/02  120.0
1            3           02/10  144.0
2            2           02/15  120.0
2            3           02/28  144.0
...          ...         ...    ...
I want to create a complex network where I link products through their buyers. The graph does not have to be directed, and I need to count the number of links between each pair of products.
In the end I need this:
Product_x  Product_y  amount
1          2          1
1          3          1
2          3          2
Can anyone help me with this?
I need an optimized way to do this. Joining the table with itself is not the solution; I really need an optimal approach here =/
CREATE TABLE X AS
SELECT
  a.product_id AS product_x,
  b.product_id AS product_y,
  count(*) AS amount
FROM table AS a
JOIN table AS b
  ON a.customer_id = b.customer_id
WHERE a.product_id < b.product_id
GROUP BY a.product_id, b.product_id;
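One alternative worth sketching (not from the original post, and only an assumption that it helps): instead of self-joining the raw table, collect each customer's products into a set and expand the pairs with two lateral views, so no join is needed. collect_set also deduplicates repeat purchases of the same product by one customer. Whether this is faster depends on how many products a single customer buys, since every within-customer pair is still generated. The table name purchases below is a placeholder for the original table.

-- Sketch only: pair generation via collect_set + two LATERAL VIEWs instead of a self-join.
-- 'purchases' stands in for the original table name.
SELECT p1 AS product_x,
       p2 AS product_y,
       count(*) AS amount
FROM (
  SELECT customer_id, collect_set(product_id) AS products
  FROM purchases
  GROUP BY customer_id
) c
LATERAL VIEW explode(products) e1 AS p1
LATERAL VIEW explode(products) e2 AS p2
WHERE p1 < p2
GROUP BY p1, p2;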

Related

Laravel sum/count nested relation

I have a problem with simple relations.
The tables look like this:
table players
id  user_id  game_id
1   2        1
2   5        1
3   3        1
4   4        2
5   2        2
table games
id  result (win or lose)
1   1
2   0
What I need as a result is:
Players Wins Losses
John 3 2
Philip 2 2
Jack 1 3
I tried a lot of queries but I can't get the proper result.
This one:
`"select * from `players` inner join `games` on `players`.`game_id` = `games`.`id`"`
This is the best I can do, but it's raw SQL and I have no idea how to rewrite it with DB:: or Eloquent. And it's not grouping anyway.
You can count directly from a relation with the model:
$category = Category::find($id);
$category->children->count();
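For the grouped Wins/Losses output the question actually asks for, here is a plain SQL sketch; it assumes a users table with id and name columns (not shown in the question) for the player names, and that games.result = 1 means a win. Translating it into DB::raw or query-builder groupBy calls is then mechanical.

-- Sketch only: 'users(id, name)' is assumed, not shown in the question.
SELECT u.name AS player,
       SUM(CASE WHEN g.result = 1 THEN 1 ELSE 0 END) AS wins,
       SUM(CASE WHEN g.result = 0 THEN 1 ELSE 0 END) AS losses
FROM players p
JOIN games g ON g.id = p.game_id
JOIN users u ON u.id = p.user_id
GROUP BY u.name;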

Distinct on two columns with same data type

In my game application I have a combats table:
id    player_one_id    player_two_id
----  ---------------  ---------------
1     1                2
2     1                3
3     3                4
4     4                1
Now I need to know how many unique users played the game. How can I apply a distinct count across both columns, player_one_id and player_two_id?
Many thanks.
By using a union you can get the distinct values across both columns:
$playerone = DB::table("combats")
    ->select("combats.player_one_id");

$playertwo = DB::table("combats")
    ->select("combats.player_two_id")
    ->union($playerone)
    ->count();
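The plain SQL equivalent (a sketch) makes the idea explicit: UNION, unlike UNION ALL, removes duplicates across both columns, so counting the combined set gives the number of unique players.

-- Sketch: UNION deduplicates across both columns before counting.
SELECT COUNT(*) AS unique_players
FROM (
    SELECT player_one_id AS player_id FROM combats
    UNION
    SELECT player_two_id FROM combats
) p;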

Simulate pipelined order by in oracle 11g

I have been working with an application that is integrated with Spring and Hibernate 4.x and whose transactions are managed by JTA in a WebLogic application server. After 3 years, one of the 100 tables in my DB has grown to about 40 million records. The DB is Oracle 11g. The response time of a query against this table is about 5 minutes because of its growing record count.
I extracted the query, ran it in SQL Developer, and used the advisor plan to suggest some indexes. After all of that, the response time was reduced to about 2 minutes. But even so, this response time does not satisfy the customer. To clarify further, here is the query:
select *
from (select (count(storehouse0_.ID) over()) as col_0_0_,
storehouse3_.storeHouse_ID as col_1_0_,
(DBPK_PUB_STOREHOUSE.get_Storehouse_Title(storehouse5_.id, 1)) as col_2_0_,
storehouse5_.Organization_Code as col_3_0_,
publicgood1_.Goods_Item_Id as col_4_0_,
storehouse0_.storeHouse_Inventory_Id as col_5_0_,
storehouse0_.Id as col_6_0_,
storehouse3_.samapel_Item_Id as col_7_0_,
samapelite10_.MAINNAME as col_8_0_,
publicgood1_.serial_Number as col_9_0_,
publicgood1_1_.production_Year as col_10_0_,
samapelpar2_.ID_SourceInfo as col_11_0_,
samapelpar2_.Pn as col_12_0_,
storehouse3_.expire_Date as col_13_0_,
publicgood1_1_.Status_Id as col_14_0_,
baseinform12_.Topic as col_15_0_,
publicgood1_.public_Num as col_16_0_,
cast(publicgood1_1_.goods_Status as number(10, 0)) as col_17_0_,
publicgood1_1_.goods_Status as col_18_0_,
publicgood1_1_.deleted as col_19_0_
from amd.Core_StoreHouse_Inventory_Item storehouse0_,
amd.Core_STOREHOUSE_INVENTORY storehouse3_,
amd.Core_STOREHOUSE storehouse5_,
amd.SMP_SAMAPEL_CODE samapelite10_
cross join amd.Core_Goods_Item_Public publicgood1_
inner join amd.Core_Goods_Item publicgood1_1_
on publicgood1_.Goods_Item_Id = publicgood1_1_.Id
left outer join amd.SMP_SOURCEINFO samapelpar2_
on publicgood1_1_.Samapel_Part_Number_Id =
samapelpar2_.ID_SourceInfo, amd.App_BaseInformation
baseinform12_
where not exists
(select ssec.samapelITem_id
from core_security_samapelitem ssec
inner join core_goods_item g
on ssec.samapelitem_id = g.samapel_item_id
where not exists (SELECT aa.groupid
FROM app_actiongroup aa
where aa.groupid in
(select au.groupid
from app_usergroup au
where au.userid = 1)
and aa.actionid = 9054)
and ssec.isenable = 1
and storehouse0_.goods_Item_ID = g.id)
and not exists
(select *
from CORE_POWER_SECURITY cps
where not exists (SELECT aa.groupid
FROM app_actiongroup aa
where aa.groupid in
(select au.groupid
from app_usergroup au
where au.userid = 1)
and aa.actionid = 9055)
and cps.inventory_id =
storehouse0_.storeHouse_Inventory_Id
and cps.goodsitemtype = 6)
and storehouse0_.storeHouse_Inventory_Id = storehouse3_.Id
and storehouse3_.storeHouse_ID = storehouse5_.Id
and storehouse3_.samapel_Item_Id = samapelite10_.MAINCODE
and publicgood1_1_.Status_Id = baseinform12_.ID
and 1 <> 2
and storehouse0_.goods_Item_ID = publicgood1_.Goods_Item_Id
and publicgood1_1_.edited = 0
and publicgood1_1_.deleted = 0
and (exists (select storehouse13_.Id
from amd.Core_STOREHOUSE storehouse13_
cross join amd.core_power power16_
cross join amd.core_power power17_
where storehouse5_.powerID = power16_.Id
and storehouse13_.powerID = power17_.Id
and (storehouse13_.Id in (741684217))
and storehouse13_.storeHouseType = 2
and (power16_.hierarchiCode like
power17_.hierarchiCode || '%')) or
(storehouse3_.storeHouse_ID in (741684217)) and
storehouse5_.storeHouseType = 1)
and (storehouse5_.storeHouse_Status not in (2, 3))
order by storehouse3_.samapel_Item_Id)
where rownum <= 10
[Note: This query is generated by Hibernate].
It is clear that ordering 40 million rows takes a lot of time.
I found the main issue of this query: I omitted the ORDER BY and ran the query, and its response time dropped to about 5 seconds. I wondered why the ORDER BY affects the response time so much.
(Somebody may think that if this table were partitioned, or if some other Oracle facility were used, it might get a better response time. That may be right, but my emphasis here is on the ORDER BY performance; if there is a way to do the ORDER BY's job more cheaply, why not do it?) In any case, I am not able to omit the ORDER BY, because the customer needs the ordering and it is necessary for paging. I found a solution, explained later with an example: I order only the records that are actually needed. It is clear that when Oracle has to sort 40 million records it naturally takes a long time, so I effectively replace the full ORDER BY with a WHERE clause that narrows the set first. With this change the response time was reduced from 2 minutes to about 5 seconds, which is very exciting for me.
I explain my solution below via an example; I would like anybody who reads this post to tell me whether this solution is good or whether there is a better one that I am not aware of.
My solution:
Let's assume that there are two tables as below:
Post table
Id   Other fields
1
2
3
4
5
…    …
Post_comment table
Id   post_id
1    5
2    5
3    5
4    5
6    5
7    2
8    2
9    2
10   3
11   1
12   1
13   1
14   1
15   1
16   1
17   1
18   1
19   1
20   1
21   1
22   1
23   1
24   1
25   1
26   4
27   4
There is a form that shows the result of a join between the POST table and the POST_COMMENT table.
I will explain both queries: one that ORDER BYs all records of the table, and one that ORDER BYs only the specific records that are needed. The results of the two queries are exactly the same, but the response time of the second approach is better.
Assume that the page size is 10 and you are on page 3.
The first query, which ORDER BYs all records of the table:
select *
from (Select res.*, rownum as rownum_
from (Select * from POST_COMMENT Order by post_id asc) res
Where rownum <= 30)
where rownum_ > 20
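For reference, the same page can also be fetched with an analytic ROW_NUMBER instead of the nested ROWNUM filters; a sketch:

-- Sketch: analytic ROW_NUMBER variant of the same pagination.
select *
from (select pc.*,
             row_number() over (order by post_id asc) as rn
      from POST_COMMENT pc)
where rn > 20 and rn <= 30;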
The second solution:
Before executing the main query, I run the query below:
select *
from (select post_id, count(id) from POST_COMMENT group by post_id)
order by post_id asc
The result of it is the following:
Post_id  Count(id)  Sum(count(id))
1        15         15
2        3          18
3        1          19
4        2          21
5        5          26
Note that the third column, Sum(count(id)), is calculated after that query: each entry is the running total of the counts of all preceding rows plus the current one.
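For what it is worth, that running total could also be produced directly by the query with an analytic SUM instead of being calculated afterwards; a sketch:

-- Sketch: running total via analytic SUM, no post-processing needed.
select post_id,
       count(id) as cnt,
       sum(count(id)) over (order by post_id) as running_total
from POST_COMMENT
group by post_id
order by post_id asc;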
So there is a formula that specifies which post_ids must be selected. The formula is:
pageSize = 10, pageNumber = 3
from: (pageNumber - 1) * pageSize = 2 * 10 = 20
to:   (pageNumber - 1) * pageSize + pageSize = 20 + 10 = 30
So I need the posts whose running Sum(count(id)) falls in (20, 30]. According to this, I need only the two post_ids with values 4 and 5, so the main query of the second approach is:
select *
from (select rownum as rownum_, res.*
from (select *
from (select * from POST_COMMENT where post_id in (4, 5))
order by post_id asc) res
where rownum <= 30)
where rownum_ > 20
If you look at both queries, you will see the biggest difference: the second query selects only the POST_COMMENT records whose post_id is 4 or 5, and only then orders those records, not all records of the table.
After posting this, I kept searching and was eventually redirected to HERE. I was able to reach a response time that is very exciting for me: it dropped from 3 minutes to less than 3 seconds. It is necessary to know that I used only one tip from all of the query optimization guidelines on that site: duplicate a constant condition for different tables whenever possible.
Note: before applying this tip, there were already some indexes on the fields used in the WHERE clause and the ORDER BY.
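As I read it, that tip means repeating a literal filter on every joined table that can also use it, so the optimizer can push the constant into each table's access path. A hypothetical sketch (the tables and columns below are made up, not taken from the query above):

-- Hypothetical sketch of "duplicate constant condition for different tables".
-- The duplicated predicate is only valid if order_items really stores customer_id too.
select o.id, i.product_id
from orders o
join order_items i
  on i.order_id = o.id
where o.customer_id = 42
  and i.customer_id = 42;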

SUM with distinct multiple lines

DB: ORACLE
Hi guys. I am constructing a query and I have the following situation:
My table
---------------------------------------
Risk    Risk Factor    Control
---------------------------------------
RK 1    RF 1           Control 1
RK 1    RF 1           Control 2
RK 2    RF 3           Control 1
---------------------------------------
So I'd like to count how many Risk Factors I have per Risk, and how many Controls I have per Risk too.
Result
--------------------------------------
Risk    SUM RF    SUM Control
--------------------------------------
RK 1    1         2
RK 2    1         1
--------------------------------------
Does anyone know how to solve this problem?
Kind Regards
I tried a simple SUM. I created a view in which I have the relation between Risk Factor and Control, and I joined it with the risk table, for example:
SELECT RK.NAME,
       SUM(CASE WHEN RFC.RISKFACTOR IS NOT NULL THEN 1 ELSE 0 END) SUM_RK,
       SUM(CASE WHEN RFC.CONTROL IS NOT NULL THEN 1 ELSE 0 END) SUM_CONTROL
FROM T_RISK RK
JOIN V_RF_CONTROL RFC
  ON RFC.RELATIONID = RK.RISKID
GROUP BY RK.NAME
You don't need to sum here - you just need to count the distinct values:
SELECT RK.NAME,
       COUNT(DISTINCT RFC.RISKFACTOR) SUM_RK,
       COUNT(DISTINCT RFC.CONTROL) SUM_CONTROL
FROM T_RISK RK
JOIN V_RF_CONTROL RFC ON RFC.RELATIONID = RK.RISKID
GROUP BY RK.NAME

How to select two max values from different records that have the same ID, for every ID in the table

I have a problem with this case: I have a log table that has many rows with the same ID but different conditions. I want to select the two max values for each ID. I've tried, but my query shows only one record, not one for every ID in the table.
Here's my records table:
order_id  seq  status
1256      2    4
1256      1    2
1257      0    2
1257      3    1
Here is my code:
WITH t AS (
  SELECT x.order_id
        ,MAX(y.seq) AS seq2
        ,MAX(y.extern_order_status) AS status
  FROM t_order_demand x
  JOIN t_order_log y
    ON x.order_id = y.order_id
  WHERE x.order_id LIKE '%12%'
  GROUP BY x.order_id)
SELECT *
FROM t
WHERE (t.seq2 || t.status) IN (SELECT MAX(tt.seq2 || tt.status) FROM t tt);
This query works, but sometimes it gives wrong values or shows only some records, not every record.
I want the result to be like this:
order_id  seq2  status
1256      2     4
1257      3     2
I think you just want an aggregation:
select d.order_id, max(l.seq) as seq2, max(l.extern_order_status) as status
from t_order_demand d join
     t_order_log l
     on d.order_id = l.order_id
where d.order_id like '%12%'
group by d.order_id;
I'm not sure what your final where clause is supposed to do, but it appears to do unnecessary filtering, compared to what you want.
