join condition with or - oracle

I have a form which contains two database blocks, A and B.
They are related by a master-detail (join) condition:
A.account = B.account
account is of type VARCHAR2(10).
It works fine.
But the problem is that column account in table B may contain data of length 5, which equals another column in table A called subacc.
How can I fetch all data with the condition below?
A.account = B.account OR A.subacc = B.account

You need a join condition in which the parenthesized terms are mutually exclusive; it should be added to the Relations node of the master data block:
( A.account=B.account AND LENGTH(B.account)>5 )
OR ( A.subacc=B.account AND LENGTH(B.account)<=5 )
As none of those columns contain NULL values, no extra condition is needed to check whether any of the columns is NULL.
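To see why the two parenthesized terms are mutually exclusive, here is a small illustrative sketch (plain Python, not Oracle Forms code; the sample account values are made up): a detail account longer than 5 characters can only match on account, and one of 5 characters or fewer can only match on subacc, so no detail row ever joins twice.

```python
def matches(master, detail):
    """Return True when the detail row joins the master row.

    master: dict with 'account' and 'subacc'; detail: dict with 'account'.
    Long detail accounts (> 5 chars) join on account, short ones on subacc,
    mirroring the two branches of the relation condition above.
    """
    if len(detail["account"]) > 5:
        return master["account"] == detail["account"]
    return master["subacc"] == detail["account"]

master = {"account": "ACC1234567", "subacc": "SUB01"}
print(matches(master, {"account": "ACC1234567"}))  # joins via account branch
print(matches(master, {"account": "SUB01"}))       # joins via subacc branch
```

Exactly one branch can fire per detail row, so the OR never produces duplicate matches.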

Related

Tips to sum values and ignore all filters except the fields of two tables in DAX?

I have 3 dimension tables and one fact table, Sales:
DimCalendar (fields: Year/Month/Day/Week)
DimCountry (field: CountryName)
DimManager (field: ManagerName)
FctSales (field: Amount)
I want to create a measure to sum the Amount of the sales (FctSales) and filter only on the fields of the tables DimCalendar and DimCountry.
After research, I was thinking about the function ALLEXCEPT, like:
CALCULATE(SUM(Sales[Amt]); ALLEXCEPT(Sales; Country[Country]; Calendar[Year]...))
but if I do that, I will have to write every column of the Calendar and Country tables in the ALLEXCEPT. I am wondering if there is another solution.
Maybe using REMOVEFILTERS() to remove every filter and then putting back the filters over DimCountry and DimCalendar might work:
CALCULATE (
SUM ( Sales[Amt] );
REMOVEFILTERS ();
VALUES( DimCountry[CountryName] );
VALUES( DimCalendar[Date] )
)
DimCalendar[Date] should be the column used for the relationship with Sales.
This measure first evaluates the filter arguments in the current filter context.
Using the columns that define the relationships as filter arguments guarantees that, whatever column was used for filtering, the filter is mapped over the relationship.
Then REMOVEFILTERS() removes any existing filter from the context, and finally the filter arguments evaluated in the first step are applied, putting back any filtering that was set over DimCalendar and DimCountry.
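The evaluation order described above can be sketched in plain Python (an analogy only, not how the DAX engine is implemented; the filter values are made up): the filter arguments are read in the current context first, REMOVEFILTERS() clears everything, and then the saved filters are applied back.

```python
# Current filter context, as column -> filtered value.
current_filters = {
    "DimManager[ManagerName]": "Ann",
    "DimCountry[CountryName]": "France",
    "DimCalendar[Date]": "2021-03-01",
}

# 1. Evaluate the filter arguments (VALUES over DimCountry/DimCalendar)
#    in the current filter context.
kept = {col: val for col, val in current_filters.items()
        if col.startswith(("DimCountry", "DimCalendar"))}

# 2. REMOVEFILTERS() wipes the whole context ...
new_context = {}

# 3. ... and the filter arguments evaluated in step 1 are applied back.
new_context.update(kept)
print(new_context)  # only the DimCountry and DimCalendar filters survive
```

The DimManager filter is gone, while the country and date filters survive, which is exactly the behaviour the measure is after.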

Sorting Cassandra Query Output Data

I am sure this is the most common problem with Cassandra.
Nevertheless:
I have this example table:
CREATE TABLE test.test1 (
a text,
b text,
c timestamp,
id uuid,
d timestamp,
e decimal,
PRIMARY KEY ((a), c, b, id)) WITH CLUSTERING ORDER BY (c ASC, b ASC, id ASC);
My query:
select b, (toUnixTimestamp(d) - toUnixTimestamp(c))/1000/60/60/24/365.25 as age from test.test1 where a = 'x' and c > -2208981600000 ;
This works fine, but I can't get the data sorted by column b, which I need: I need all the entries in column b and their corresponding ages.
eg:
select b, (toUnixTimestamp(d) - toUnixTimestamp(c))/1000/60/60/24/365.25 as age from test.test1 where a = 'x' and c > -2208981600000 order by b;
gives the error:
InvalidRequest: Error from server: code=2200 [Invalid query] message="Order by currently only supports the ordering of columns following their declared order in the PRIMARY KEY"
I have tried different orders of the clustering columns and different options in the partition key, but I get caught by some logic and just can't seem to outwit Cassandra to get what I want. If I get the sort order I want, I lose the ability to filter on column c.
Is there some logic I am not applying here? Alternatively, what must I omit to get a list of entries in column b with the corresponding age?
Short answer: it's impossible to sort data on an arbitrary column using CQL, even if it's part of the primary key. Cassandra sorts data first by the first clustering column, then within it by the second, and so on (see this answer).
So the only workaround right now is to fetch all the data and sort it on the client side.
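A minimal sketch of that client-side sort, assuming the rows have already been fetched (the sample rows below stand in for the result of `session.execute(...)` from the DataStax Python driver, and the ages are made-up values):

```python
# Rows as Cassandra would return them: ordered by clustering column c,
# not by b. Each dict stands in for one row of the age query.
rows = [
    {"b": "charlie", "age": 42.0},
    {"b": "alpha", "age": 17.5},
    {"b": "bravo", "age": 30.1},
]

# Re-sort in the application by column b, which CQL cannot do here.
rows_sorted = sorted(rows, key=lambda r: r["b"])
print([r["b"] for r in rows_sorted])  # ['alpha', 'bravo', 'charlie']
```

For large partitions this means pulling the whole result set into memory, so it only scales as far as the client can hold the data.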

update table from child table with group by in oracle

I have two tables: master and child. I have accidentally lost voucher_date in the master table. Now how can I update it from the child table, where many records are inserted against one voucher_number?
I have tried the query:
update salem set (vch_date, vch_temp) = (
  SELECT vch_date,
         vch_no
  FROM sale
  WHERE salem.vch_no = sale.vch_no
  GROUP BY vch_no, vch_date);
but I got the message:
SQL Error: ORA-01427: single-row subquery returns more than one row
01427. 00000 - "single-row subquery returns more than one row"
Regards.
The GROUP BY you used is, actually, DISTINCT applied to the selected column list. It appears that, for a certain vch_no which establishes the relation between those two tables, you don't have a distinct vch_date + vch_no combination, which means that there are several vch_date values for some vch_no. What to do? Pick one, for example the maximum.
Also, you're setting salem.vch_temp to sale.vch_no, which is pointless: vch_no from sale is equal to vch_no from salem, so you can set salem.vch_temp to salem.vch_no directly.
UPDATE salem m SET
m.vch_date = (SELECT MAX(s.vch_date)
FROM sale s
WHERE m.vch_no = s.vch_no
),
m.vch_temp = m.vch_no;
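The ORA-01427 error and the MAX() fix can be sketched in plain Python (an illustration only; the dates are made-up ISO strings so that `max()` picks the latest): when one vch_no maps to several vch_date values, the correlated subquery returns more than one row, and MAX() collapses them to a single value.

```python
# Sample child-table rows: vch_no 1 has two different vch_date values,
# which is exactly what makes the unaggregated subquery fail.
sale = [
    {"vch_no": 1, "vch_date": "2023-01-05"},
    {"vch_no": 1, "vch_date": "2023-01-07"},
    {"vch_no": 2, "vch_date": "2023-02-01"},
]

def max_date_for(vch_no):
    """What SELECT MAX(s.vch_date) ... WHERE m.vch_no = s.vch_no returns."""
    return max(r["vch_date"] for r in sale if r["vch_no"] == vch_no)

print(max_date_for(1))  # '2023-01-07': one value, although vch_no 1 has two rows
```

Without the aggregate, the lookup for vch_no 1 would yield two candidate dates and the single-row update could not proceed.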

Avoid duplicate values for certain column in DAX query

I am using the following statement to get a result table:
EVALUATE
(
CALCULATETABLE
(
ADDCOLUMNS (
'Case',
"Casenumber", RELATED( 'CaseDetails'[nr]),
),
'Case'[Date] <= value(#dateto) )
)
However, I want to get only one record per casenumber. In SQL I would solve this with a GROUP BY statement, but how should I do this in DAX? Case also has a dimkey, so several cases with the same casenumber can have different dimkeys.
Try this:
EVALUATE
CALCULATETABLE(
SUMMARIZE(
Case
,<comma-separated list of fields from Case you want>
,"CaseNumber"
,RELATED(CaseDetails[nr])
)
,Case[Date] <= VALUE(#dateto)
)
SUMMARIZE() takes a table as its first argument, then a comma-separated list of fields from that table and from any tables it is related to where it is on the many side (thus in a star schema, SUMMARIZE()ing the fact table will allow you to refer directly to any dimension table field). This is followed by a comma-separated list of <name>, <expression> pairs, where <name> is a quoted field name and <expression> is a scalar value which is evaluated in the row context of the table in the first argument.
If you don't need to rename CaseDetails[nr], then the query would look like this (just for an illustrative example):
EVALUATE
CALCULATETABLE(
SUMMARIZE(
Case
,<comma-separated list of fields from Case you want>
,CaseDetails[nr]
)
,Case[Date] <= VALUE(#dateto)
)
In such a query, all fields will come through with column headings in the format of 'table'[field], so there is no ambiguity if you have identical field names in multiple related tables.
Edit to address new information in original:
SUMMARIZE(), just like SQL's GROUP BY clause, will not collapse rows that differ in any of the grouped fields. If you include a field of higher cardinality than the field you want to group by, you will always see duplicates.
Is your [DimKey] necessary in the resultset? If yes, then there's no way to make your resultset smaller than the number of distinct values of [DimKey].
If [DimKey] is unnecessary, simply omit it from the list of fields in SUMMARIZE().
If you want only a specific [DimKey], e.g. the most recent (assuming it's an IDENTITY field and the max value is the latest), then you can bring it in with another ADDCOLUMNS() wrapped around your current SUMMARIZE():
EVALUATE
ADDCOLUMNS(
SUMMARIZE(
Case
,<comma-separated list of fields from Case except for [DimKey]>
,"CaseNumber"
,RELATED(CaseDetails[nr])
)
,"MaxDimKey"
,CALCULATE(MAX(Case[DimKey]))
)
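The grouping logic of that last query can be sketched in plain Python (an analogy, not DAX semantics; the case numbers and keys are made up): group rows by case number, then take the maximum DimKey per group, mirroring the ADDCOLUMNS-around-SUMMARIZE pattern.

```python
from collections import defaultdict

# Sample Case rows: case 100 appears with two different DimKey values.
case = [
    {"CaseNumber": 100, "DimKey": 1},
    {"CaseNumber": 100, "DimKey": 3},
    {"CaseNumber": 200, "DimKey": 2},
]

# SUMMARIZE-like step: one bucket per distinct CaseNumber.
groups = defaultdict(list)
for row in case:
    groups[row["CaseNumber"]].append(row["DimKey"])

# ADDCOLUMNS-like step: attach MAX(DimKey) to each group.
result = [{"CaseNumber": k, "MaxDimKey": max(v)} for k, v in groups.items()]
print(result)
```

Each case number now appears exactly once, with the highest (i.e. assumed latest) DimKey attached.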

Performance for validating data against various database tables

My Problem:
I "loop" over a table into a local structure named ls_eban, and with that information I must follow these instructions:
ls_eban-matnr MUST BE in table zmd_scmi_st01 (1st control table, global)
ls_eban-werks MUST BE in table zmd_scmi_st05 (2nd control table, global)
ls_eban-knttp MUST BE in table zmd_scmi_st06 (3rd control table, global)
I need a selection that is clear and performant. I actually have one, but it isn't performant at all.
My solution:
SELECT st01~matnr st05~werks st06~knttp
FROM zmd_scmi_st01 AS st01
INNER JOIN zmd_scmi_st05 AS st05
ON st05~werks = ls_eban-werks
INNER JOIN zmd_scmi_st06 AS st06
ON st06~knttp = ls_eban-knttp
INTO TABLE lt_control
WHERE st01~matnr = ls_eban-matnr AND st01~bedarf = 'X'
AND st05~bedarf = 'X'.
I also have to say, that the control tables doesn't have any relation with each other (no primary key and no secondary key).
The first thing you should not do is run the SELECT inside the loop. Instead of:
loop at lt_eban into ls_eban.
Select ....
endloop.
you should do a single SELECT:
if lt_eban[] is not initial.
select ...
into table ...
from ...
for all entries in lt_eban
where ...
endif.
There may be more inefficiencies to correct if we had more info (as mentioned in the comment by vwegert; for instance, really no keys on the control tables?), but the SELECT in a loop is the first thing that jumps out at me.
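The effect of FOR ALL ENTRIES can be sketched in plain Python (a language-neutral illustration, not ABAP; the table contents are made up, and the field names mirror the question): instead of one database round trip per loop pass, collect the keys once and do a single set-based selection.

```python
# Driver table and one control table, as lists of dicts.
lt_eban = [{"matnr": "M1"}, {"matnr": "M2"}, {"matnr": "M1"}]
zmd_scmi_st01 = [{"matnr": "M1", "bedarf": "X"}, {"matnr": "M3", "bedarf": "X"}]

# One pass to build the key set (what FOR ALL ENTRIES does implicitly,
# including duplicate removal) ...
wanted = {row["matnr"] for row in lt_eban}

# ... then one selection against the control table instead of N SELECTs.
lt_control = [row for row in zmd_scmi_st01 if row["matnr"] in wanted]
print(lt_control)  # [{'matnr': 'M1', 'bedarf': 'X'}]
```

This is also why the `IF lt_eban[] IS NOT INITIAL` guard matters: with an empty driver table, FOR ALL ENTRIES would select everything rather than nothing.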
