Creating Charts in APEX - Oracle

How do I show the percentage of a total value in a pie chart? E.g. let's say my total sick leave is 30 days and I take 15 days; how do I show 15/30 in the pie?

Generally speaking, to get a percentage value you'd divide those two values and multiply the result by 100 (and, possibly, round it to 0, 1 or 2 decimals).
That would be - in your example - 50%, right?
SQL> select round(15 / 30 * 100, 2) from dual;
ROUND(15/30*100,2)
------------------
50
SQL>
An APEX chart query expects 3 columns to be specified, e.g.
select null as link,
what as label,
number_of_days as value
from some_table
where some_condition
If you could provide a test case so that we could see what you really have, we'd be able to suggest something more.
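In the meantime, putting the two pieces together, a pie chart query for the sick-leave example might look like this (just a sketch; the sick_leave table and its days_taken and total_days columns are made-up names):
select null as link,
       'Taken' as label,
       round(days_taken / total_days * 100, 2) as value
  from sick_leave
union all
select null as link,
       'Remaining' as label,
       round((total_days - days_taken) / total_days * 100, 2) as value
  from sick_leave
With 15 days taken out of 30, this produces two slices of 50 each.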

Related

ClickHouse - how to get counts per 1 minute or 1 day

I have a table in ClickHouse for keeping statistics and metrics. Its structure is:
datetime|metric_name|metric_value
I want to keep statistics and limit the number of accesses per 1 minute, 1 hour, 1 day and so on, so I need event counts for the last minute, hour or day for every metric_name, and I want to present these statistics in a chart.
I do not know how to write the query: I need the count for each metric grouped into exact intervals of, for example, 1 minute, 1 hour or 1 day.
I used to work with InfluxDB:
SELECT SUM(value) FROM `TABLE` WHERE `metric_name`=`metric_value` AND time >= now() - 1h GROUP BY time(5m) fill(0)
In fact, I want to get the number of each metric per 5 minutes in the previous 1 hour.
I do not know how to use aggregations for this problem.
ClickHouse has functions for generating Date/DateTime group buckets, such as toStartOfWeek, toStartOfHour and toStartOfFiveMinute. You can also use the intDiv function to manually divide value ranges. However, the fill feature is still on the roadmap.
For example, you can rewrite the InfluxDB SQL (without the fill) in ClickHouse like this:
SELECT SUM(value) FROM `TABLE`
WHERE `metric_name` = `metric_value` AND time >= now() - toIntervalHour(1)
GROUP BY toStartOfFiveMinute(time)
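Alternatively, the intDiv approach mentioned above would look something like this (a sketch; the metrics table and metric name 'k1' are placeholders, matching the working example below):
SELECT
    toDateTime(intDiv(toUInt32(datetime), 300) * 300) AS slot,
    SUM(metric_value) AS metric_value_sum
FROM metrics
WHERE (metric_name = 'k1') AND (datetime >= (now() - toIntervalHour(1)))
GROUP BY slot
ORDER BY slot
Here intDiv(toUInt32(datetime), 300) * 300 snaps each timestamp down to its 300-second (5-minute) boundary.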
You can also refer to this discussion: https://github.com/yandex/ClickHouse/issues/379
Update: there is a timeSlots function that can help generate empty buckets. Here is a working example:
SELECT
slot,
metric_value_sum
FROM
(
SELECT
toStartOfFiveMinute(datetime) AS slot,
SUM(metric_value) AS metric_value_sum
FROM metrics
WHERE (metric_name = 'k1') AND (datetime >= (now() - toIntervalHour(1)))
GROUP BY slot
)
ANY RIGHT JOIN
(
SELECT arrayJoin(timeSlots(now() - toIntervalHour(1), toUInt32(3600), 300)) AS slot
) USING (slot)
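The ANY RIGHT JOIN against the generated slots is what emulates fill(0): five-minute slots with no matching rows in the aggregated subquery come back with metric_value_sum at its numeric default of 0.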

Hive SELECT col, COUNT(*) mismatch

Let me start by saying I am very new to Hive, so I'm not sure what information folks will need to help me out. Please let me know what information would be useful. Also, while I'd usually create a small dataset to recreate the problem with, I think this problem has to do with the scale of my dataset, because I can't seem to recreate it on a smaller one. Let me know if you have suggestions to make this easier to answer.
Okay, now that that's out of the way, here's my problem. I have a huge dataset, partitioned by month, with about 500 million rows per month. I have a column with an ID number in it (I'll call it idcol), and I want to closely examine a couple of examples where there's a high number of repeated IDs and a couple where the number is very low. So, I used this:
SELECT idcol, COUNT(*) FROM table WHERE month = 7 GROUP BY idcol LIMIT 10;
And got:
000005185884381 13
000035323848000 24
000017027256315 531
000010121767109 54
000039844553332 3
000013731352481 309
000024387407996 3
000028461234451 67
000016564844672 1
000032933040806 17
So, I went to investigate the first idcol with a count of 3, with:
SELECT * FROM table WHERE month = 7 AND idcol = '000039844553332';
I expected to see just 3 rows, but ended up with 469 rows found! That was strange enough, but then I just happened to run the original line of code above but with LIMIT 5 instead and ended up with:
000005185884381 13
000017027256315 75
000010121767109 25
000013731352481 59
000024387407996 1
And, it may be hard to see because the IDs are so long, but idcol 000017027256315 ended up with a count of 531 when I did LIMIT 10 and just 75 when I did LIMIT 5.
What am I missing?! How can I get a correct count of just a small number of values so I can investigate further?!
BTW my first thought was to make the counting part a sub-query, but that didn't change a thing. I used:
SELECT * FROM (SELECT idcol, COUNT(*) FROM table WHERE month = 7 GROUP BY idcol) x LIMIT 10;
...same EXACT results
Most likely the counts are being computed from statistics. See here for the bug and the related discussion. Try disabling that behaviour first:
SET hive.compute.query.using.stats = false;
If this doesn't fix it, try running the ANALYZE command before the COUNT(*):
ANALYZE TABLE table_name PARTITION(month) COMPUTE STATISTICS;
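Combining both suggestions, the whole session might look like this (a sketch, using the placeholder table and column names from the question):
SET hive.compute.query.using.stats = false;
ANALYZE TABLE table PARTITION(month = 7) COMPUTE STATISTICS;
SELECT idcol, COUNT(*) FROM table WHERE month = 7 GROUP BY idcol LIMIT 10;
With stats-based answers disabled, the counts come from an actual scan rather than from stale partition statistics.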

Split amount into multiple rows if amount >= $10M or <= $-10B

I have a table in an Oracle database which may contain amounts >= $10M or <= $-10B.
If the value is greater than or equal to $10M, I need to break it into one or more 99999999.99 chunks and also include the remainder.
If the value is less than or equal to $-10B, I need to break it into one or more 999999999.99 chunks and also include the remainder.
Your question is somewhat unreadable and you did not provide examples, but here is something for a start, which may help you or someone with a similar problem.
Let's say you have this data and you want to divide amounts into chunks not greater than 999:
id amount
-- ------
1 1500
2 800
3 2500
This query:
select id, amount,
case when level=floor(amount/999)+1 then mod(amount, 999) else 999 end chunk
from data
connect by level<=floor(amount/999)+1
and prior id = id and prior dbms_random.value is not null
...divides the amounts; the last row for each id contains the remainder. The "prior id = id and prior dbms_random.value is not null" conditions keep each row's generated hierarchy separate and stop Oracle from treating the self-referencing connect by as a cycle. Output is:
ID AMOUNT CHUNK
------ ---------- ----------
1 1500 999
1 1500 501
2 800 800
3 2500 999
3 2500 999
3 2500 502
SQLFiddle demo
Edit: full query according to additional explanations:
select id, amount,
case
when amount>=0 and level=floor(amount/9999999.99)+1 then mod(amount, 9999999.99)
when amount>=0 then 9999999.99
when level=floor(-amount/999999999.99)+1 then -mod(-amount, 999999999.99)
else -999999999.99
end chunk
from data
connect by ((amount>=0 and level<=floor(amount/9999999.99)+1)
or (amount<0 and level<=floor(-amount/999999999.99)+1))
and prior id = id and prior dbms_random.value is not null
SQLFiddle
Please adjust the numbers for the positive and negative borders (9999999.99 and 999999999.99) according to your needs.
There are more possible solutions (a recursive CTE query, a PL/SQL procedure, maybe others); this hierarchical query is one of them.
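For comparison, a recursive CTE version of the simple 999-chunk query could look like this (a sketch over the same sample data; it assumes non-negative amounts and Oracle 11.2 or later):
with chunks (id, amount, remaining) as (
  select id, amount, amount from data
  union all
  select id, amount, remaining - 999 from chunks where remaining > 999
)
select id, amount, least(remaining, 999) as chunk
from chunks
order by id, remaining desc;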

OBIEE using the same folder/fact twice aggregating on both

I know the exact SQL I would need to write to retrieve the results I'm looking for from the Oracle BI tool, however, as I am new to Oracle BI I am struggling to find a way to reproduce the same results. I realize that the ultimate answer largely depends on the BI data model and that takes a lot more communication than a question on Stack Overflow will allow, so I'm looking for more generic how-to answers than a specific definitive answer for my scenario.
Perhaps the SQL will help for starters:
select "All"."DT", ("LessThan5Mins"."Count" / "All"."Count") * 100
from
(
select to_char(m."EndDateTime", 'YYYY-MM') "DT", count(*) "Count"
from "Measurement" m,
"DwellTimeMeasurement" dtm
where dtm."MeasurementBase_id" = m."Id"
group by to_char(m."EndDateTime", 'YYYY-MM')
) "All",
(
select to_char(m."EndDateTime", 'YYYY-MM') "DT", count(*) "Count"
from "Measurement" m,
"DwellTimeMeasurement" dtm
where dtm."MeasurementBase_id" = m."Id"
and m."MeasValue" <= 300
group by to_char(m."EndDateTime", 'YYYY-MM')
) "LessThan5Mins"
where "All"."DT" = "LessThan5Mins"."DT";
The purpose of this is to return the percentage of dwell time records that were less than or equal to 5 mins (300 seconds).
I have a fact that represents the "MeasValue" field in the above query.
All attempts I've made to reproduce the dual result set nature of the above query in BI have failed.
Is the above possible in OBIEE and if so, how might I achieve this?
I'm assuming that you have imported the Measurement (M) and DwellTimeMeasurement (DTM) tables into the physical layer of the RPD, specified the join on DTM.MeasurementBase_id = M.Id, and then brought them both through to the presentation layer.
If so, then you could start building this query in Answers on the criteria tab by dragging in M.EndDateTime and any OBIEE measure column from DTM, for example DTM.Amount. Edit the formula for the DTM.Amount column:
Filter the column by clicking the filter button in the column formula editor.
In the following dialog box double click on M.MeasValue and then select "is less than or equal to" and type 300 in the Value text box. Click OK twice and your column formula should now look something like this:
FILTER(DTM.Amount USING (M.MeasValue <= 300))
Now wrap this with COUNT():
COUNT(FILTER(DTM.Amount USING (M.MeasValue <= 300)))
This will give the count of records with M.MeasValue <= 300. You could rename this column to be "LessThan5Mins". Click OK to save the new formula. Now drag in the DTM.Amount column again but this time only perform a COUNT():
COUNT(DTM.Amount)
This will give you the count of all dwell time records. You could rename this to "All". Finally, drag in the DTM.Amount column one last time and edit its formula again. This is where you will calculate the percentage, with a formula similar to the following:
COUNT(FILTER(DTM.Amount USING (M.MeasValue <= 300))) / COUNT(DTM.Amount) * 100
So ultimately you will have four columns with the following titles and formulas:
TITLE            FORMULA
---------------  -------
EndDateTime      M.EndDateTime
LessThan5Mins    COUNT(FILTER(DTM.Amount USING (M.MeasValue <= 300)))
All              COUNT(DTM.Amount)
% LessThan5Mins  COUNT(FILTER(DTM.Amount USING (M.MeasValue <= 300))) / COUNT(DTM.Amount) * 100
Note that including the EndDateTime column takes care of grouping the records. Also, to match your original query you would only need the EndDateTime and % LessThan5Mins columns (you could hide or exclude the other columns) but I wanted to demonstrate for you the process of filtering column values in OBIEE.
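For reference, the logical SQL behind an analysis like this would look roughly as follows (a sketch only; "Subject Area", M and DTM stand in for your own presentation layer names):
SELECT
  M.EndDateTime,
  COUNT(FILTER(DTM.Amount USING (M.MeasValue <= 300))) / COUNT(DTM.Amount) * 100
FROM "Subject Area"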

Stacked column Flash chart counting all values

I am building a stacked column Flash chart on top of my query. I would like to split the values in each column by location. For argument's sake, say I have 5 IDs in location 41, 3 IDs in location 21 and 8 IDs in location 1.
select
'' link,
To_Char(ENQUIRED_DATE,'MON-YY') label,
count(decode(location_id,41,id,0)) "location1",
count(decode(location_id,21,id,0)) "location2",
count(decode(location_id,1,id,0)) "location3"
from "my_table"
where
some_conditions = 'Y';
As a result of this query APEX creates a stacked column with three separate parts (hurray!); however, instead of having the values 5, 3 and 8, it returns three regions of 16, 16 and 16 (16 = 5 + 3 + 8).
So obviously APEX is going through all the decode conditions and adding up all the values.
I am trying to achieve something described in this article.
APEX doesn't appear to be doing anything funky; you'd get the same result running that query through SQL*Plus. When you do:
count(decode(location_id,41,id,0)) "location1",
.. then the count gets incremented for every row - it doesn't matter which column or value you put inside the decode, because count() counts every non-null value, and the zero is treated like any other fixed (non-null) value. I think you meant to use sum:
sum(decode(location_id,41,1,0)) "location1",
Here each row is assigned either zero or one, and summing those gives you the number of rows that got a one, which is the number of rows with the specified location_id value.
Personally I'd generally use case over decode, but the result is the same:
sum(case when location_id = 41 then 1 else 0 end) "location1",
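Putting it together, the corrected chart query would be (same placeholder table and condition as in the question, plus the group by that the label column needs):
select '' link,
       to_char(enquired_date, 'MON-YY') label,
       sum(case when location_id = 41 then 1 else 0 end) "location1",
       sum(case when location_id = 21 then 1 else 0 end) "location2",
       sum(case when location_id = 1 then 1 else 0 end) "location3"
from "my_table"
where some_conditions = 'Y'
group by to_char(enquired_date, 'MON-YY');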
