Index column not resetting after I apply a filter in DAX? - dax

I have a table which shows the time difference between alerts for different device IDs. TimeSincePrev says how long it has been since the previous alert on that device ID.
My table looks like this:
Time              Device ID  Type    TimeSincePrev(seconds)  index
22.1.21 04:02:04  12         None    0                       1
22.1.21 04:24:07  13         low     1320                    2
22.1.21 04:26:04  14         medium  120                     3
...
However, when I filter by Device ID, the indexes do not reset to 1, 2, 3, ...; instead they jump to seemingly random numbers, which makes TimeSincePrev wrong. How do I fix it so that the index is correct after filtering, which should hopefully also correct the TimeSincePrev column? Also, is there a way to get TimeSincePrev to report actual seconds, rather than whole minutes converted to seconds as it does now?
This is what the definition of the index column looks like:
index =
COUNTROWS(
    FILTER(VALUES('Table'[Time]), 'Table'[Time] < EARLIER('Table'[Time]))
) + 1
Here is how TimeSincePrev is defined:
TimeSincePrev =
IF(DATEDIFF(LOOKUPVALUE('Table'[Time], 'Table'[index], 'Table'[index] - 1), 'Table'[Time], MINUTE) > 720,
   BLANK(),
   DATEDIFF(LOOKUPVALUE('Table'[Time], 'Table'[index], 'Table'[index] - 1), 'Table'[Time], MINUTE)
) * 60
How do I prevent the indexes from getting scrambled like this when I filter by Device ID:
Time              Device ID  Type    TimeSincePrev(seconds)  index
22.1.21 04:02:04  12         None    0                       6
22.1.21 04:24:07  12         low     1320                    3
22.1.21 04:26:04  12         medium  120                     1
22.1.21 04:02:04  12         None    0                       2
22.1.21 04:24:07  12         low     1320                    4
22.1.21 04:26:04  12         medium  120                     5
...

Are you using a column for this? If so, you need to create a measure that handles the indexing instead.
Calculated columns are evaluated only at refresh time (when you refresh the dataset), NOT at query time (when you click on slicers), so a column cannot respond to your slicer selection and recalculate on the fly.
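To make the expected per-device behaviour concrete, here is a minimal pandas sketch of the computation that the measure (or a per-device ranking) needs to reproduce; the column names follow the question's table, and the sample rows are illustrative:

import pandas as pd

# Sample alert log; column names follow the question's table.
df = pd.DataFrame({
    "Time": pd.to_datetime(["2021-01-22 04:02:04", "2021-01-22 04:24:07",
                            "2021-01-22 04:26:04", "2021-01-22 16:30:00"]),
    "DeviceID": [12, 12, 13, 12],
    "Type": ["None", "low", "medium", "low"],
}).sort_values(["DeviceID", "Time"])

# The index restarts at 1 for each device, in time order.
df["index"] = df.groupby("DeviceID").cumcount() + 1

# Seconds since the previous alert on the *same* device (first alert -> 0).
df["TimeSincePrev"] = df.groupby("DeviceID")["Time"].diff().dt.total_seconds().fillna(0)

# Blank out gaps longer than 720 minutes, mirroring the IF/BLANK() in the question.
df["TimeSincePrev"] = df["TimeSincePrev"].mask(df["TimeSincePrev"] > 720 * 60)

print(df)

The key point carries over to DAX: whatever ranks the rows and looks up the previous time has to be restricted to rows with the current row's Device ID, rather than ranking the whole table on Time alone.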

Related

Calculate the time difference for the same column in Spotfire

I am a beginner with Spotfire. I have a problem calculating the difference between values in the same column. A sample table could look like this:
id timestamp state
1 7/1/2016 12:00:01 AM 1
2 7/1/2016 12:00:03 AM 0
3 7/1/2016 12:00:04 AM 1
4 7/1/2016 12:00:06 AM 0
5 7/1/2016 12:00:09 AM 1
6 7/1/2016 12:00:10 AM 0
7 7/1/2016 12:00:12 AM 1
I want to calculate the time difference between the timestamps of the rows where the state is 1; the final table I want to have is:
id timestamp state time_difference
3 7/1/2016 12:00:04 AM 1 3
5 7/1/2016 12:00:09 AM 1 5
7 7/1/2016 12:00:12 AM 1 3
It seems that I should define an expression for the calculation, but I have no idea how to do it for just one parameter :(. Could somebody help me?
One more small question: what if the timestamp column is just a numeric value? How can I calculate the difference then; is there a related function like DateDiff() for that? For example:
id times state
1 12 1
2 7 0
3 10 1
4 11 0
5 6 1
6 9 0
7 7 1
The result would be:
id times state difference
3 10 1 -2
5 6 1 -4
7 7 1 1
After running the code, I see the issue below:
If a row has the same timestamp as the previous row, the difference stays the same as before, but the difference for rows with the same timestamp should actually be 0.
thanks for your help :)
Assuming your data is sorted in ascending order by [timestamp] before you import it, you can partition using the Previous function with Over, restricted to the rows where [state] = 1.
Insert a calculated column with this expression:
If([state]=1,DateDiff("ss",Min([timestamp]) OVER (Previous([timestamp])),[timestamp]))
You will see it populated in your table.
Then, if you ONLY want to see the rows that have the difference you mentioned, on your table you can go to:
Right Click > Properties > Data > Limit data using expression
and insert the expression: [time_difference] > 1
This will limit the table to just those rows.
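For reference, the logic the question asks for (differences between consecutive state = 1 timestamps) can also be sketched outside Spotfire; this is a small pandas illustration using the sample rows from the question, everything else is assumed:

import pandas as pd

df = pd.DataFrame({
    "id": [1, 2, 3, 4, 5, 6, 7],
    "timestamp": pd.to_datetime([
        "2016-07-01 00:00:01", "2016-07-01 00:00:03", "2016-07-01 00:00:04",
        "2016-07-01 00:00:06", "2016-07-01 00:00:09", "2016-07-01 00:00:10",
        "2016-07-01 00:00:12"]),
    "state": [1, 0, 1, 0, 1, 0, 1],
})

# Keep only the state == 1 rows, then diff consecutive timestamps in seconds.
ones = df[df["state"] == 1].copy()
ones["time_difference"] = ones["timestamp"].diff().dt.total_seconds()

# The first state == 1 row has no predecessor, so drop its empty difference.
print(ones.dropna(subset=["time_difference"]))  # ids 3, 5, 7 -> 3, 5, 3

The same idea covers the second part of the question: if the column is plain numbers instead of timestamps, filter to state == 1 and call .diff() on that column directly; no DateDiff-style function is needed.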

BIRT interpolate data in crosstab for rows/cols without data

Crosstab interpolate data so graph 'connects the dots'
I have trouble with my crosstab or graph not interpolating the data correctly. I think this can be solved, but I'm not sure. Let me explain what I did.
I have a data cube with the rows grouping data by week number and the columns grouping data per record type. I added two flag bits to the dataset so I can see whether a record is new or old. In the data cube I added a measure to sum these bits per row/column group, effectively counting new and old records per week per column type. I also added a sum of records per column type.
In the crosstab I added two running sums for the new and old record counts, plus a data cell to calculate the actual records per row/column group. Thus actual records = total for the column type - runningsum(old) + runningsum(new).
So let's say there are 20 records in this set for a column type.
In week 1: 3 old records, 2 new records. The running sums become 3 and 2. Actual = 20 - 3 + 2 = 19 (correct)
In week 2: no data. The running sums are not visible. Actual = 20 - null + null = 20 (wrong)
In week 3: no data. The running sums are not visible. Actual = 20 - null + null = 20 (wrong)
In week 4: 2 old records, 1 new record. The running sums become 5 and 3. Actual = 20 - 5 + 3 = 18 (correct)
In week 5: no data. The running sums are not visible. Actual = 20 - null + null = 20 (wrong)
In week 6: 6 old records, 2 new records. The running sums become 11 and 5. Actual = 20 - 11 + 5 = 14 (correct)
In week 7: no data. The running sums are not visible. Actual = 20 - null + null = 20 (wrong)
So the graph will display:
19
20
20
18
20
14
20
If I add a condition to the actual calculation, it becomes:
19
null
null
18
null
14
null
The graph doesn't ignore the null values, but thinks they are 0. The graph is wrong.
Is there a way to let the graph really ignore the null values?
Another solution would be for the running sums to display the last known value, or to just add 0 if there is no data. Any idea how to do this?
Obviously, a correct graph would read:
19
19
19
18
18
14
14
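To make the suggested fix concrete (running sums holding their last known value across weeks with no data), here is a small pandas sketch of the arithmetic using the numbers from the example above; it only illustrates the logic, it is not a BIRT expression:

import pandas as pd

total = 20  # total records for the column type

# Per-week old/new counts from the example; weeks with no data are missing (NaN).
weeks = pd.DataFrame(
    {"old": [3, None, None, 2, None, 6, None],
     "new": [2, None, None, 1, None, 2, None]},
    index=range(1, 8),  # week numbers 1..7
)

# Treat missing weeks as 0 before accumulating, so each running sum
# simply carries its last known value forward through the empty weeks.
running = weeks.fillna(0).cumsum()

actual = total - running["old"] + running["new"]
print(actual.tolist())  # [19.0, 19.0, 19.0, 18.0, 18.0, 14.0, 14.0]

This is the "add 0 if there is no data" variant from the question: accumulating zeros makes each running sum hold its previous value, which yields exactly the 19, 19, 19, 18, 18, 14, 14 series above.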

Hive Script - How to transform a table / find the average of certain records according to one column's name?

I want to transform a Hive table by aggregating based on averages. However, I don't want the average value of an entire column, I want the average of the records in that column that have the same type in another column.
Here's an example, easier than trying to explain:
TABLE I HAVE:
Timestamp  CounterName  CounterValue  MaxCounterValue  MinCounterValue
00:00      Counter1     3             3                1
00:00      Counter2     4             5                2
00:00      Counter3     1             4                1
00:00      Counter4     6             6                1
00:05      Counter1     3             5                2
00:05      Counter2     2             2                2
00:05      Counter3     4             5                4
00:05      Counter4     6             6                5
...
TABLE I WANT:
CounterName  AvgCounterValue  MaxCounterValue  MinCounterValue
Counter1     3                5                1
Counter2     3                5                2
Counter3     2.5              5                1
Counter4     6                6                1
So I have a list of a bunch of counters, each of which has multiple records (one per 5-minute time period). Every time a counter is logged, it has a value, a max value during those 5 minutes, and a min value. I want to aggregate this huge table so that it has just one record for each counter, recording the overall average value for that counter from all the records in the table, and the overall min/max value of the counter in the table.
The reason this is difficult is that all the documentation shows how to aggregate by the average of an entire column - I don't know how to split it up into groups.
Here's the script I've started with:
FROM HighCounters INSERT OVERWRITE TABLE MdsHighCounters
SELECT
HighCounters.CounterName AS CounterName,
HighCounters.CounterValue AS CounterValue,
HighCounters.MaxCounterValue AS MaxCounterValue,
HighCounters.MinCounterValue AS MinCounterValue
GROUP BY HighCounters.CounterName;
And I don't know where to go from there... any ideas? Thanks!!
I think I solved my own problem:
FROM HighCounters INSERT OVERWRITE TABLE MdsHighCounters
SELECT
HighCounters.CounterName AS CounterName,
AVG(HighCounters.CounterValue) AS CounterValue,
MAX(HighCounters.MaxCounterValue) AS MaxCounterValue,
MIN(HighCounters.MinCounterValue) AS MinCounterValue
GROUP BY HighCounters.CounterName;
Does this look right to you?
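As a quick sanity check of the grouped aggregation, here is the same computation sketched in pandas on the example rows; the table and column names come from the question, the rest is illustrative:

import pandas as pd

# Example rows from the question.
high_counters = pd.DataFrame({
    "Timestamp": ["00:00"] * 4 + ["00:05"] * 4,
    "CounterName": ["Counter1", "Counter2", "Counter3", "Counter4"] * 2,
    "CounterValue": [3, 4, 1, 6, 3, 2, 4, 6],
    "MaxCounterValue": [3, 5, 4, 6, 5, 2, 5, 6],
    "MinCounterValue": [1, 2, 1, 1, 2, 2, 4, 5],
})

# One row per counter: average value, overall max, overall min.
summary = high_counters.groupby("CounterName").agg(
    AvgCounterValue=("CounterValue", "mean"),
    MaxCounterValue=("MaxCounterValue", "max"),
    MinCounterValue=("MinCounterValue", "min"),
)
print(summary)

It reproduces the "TABLE I WANT" values above (e.g. Counter3 -> 2.5, 5, 1), which is the same per-counter summary the grouped AVG/MAX/MIN in the HiveQL should produce.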

How to structure a db table to store ranges like 0;20;40;60;80 or 0;50;100;150;200;250; and so on

I want to store the ranges below in a table; the number of items in Template01 is 6, and in the second it is 4. How can we structure the table to store this information? Storing these values as a comma-separated string, or having one column for each range value, would be my last resort. Can you help with how to structure the table/tables to store the information below?
Template01 0 10 20 30 40 50
Template02 0 50 100 150
Template03 0 100 200 300 400 500 600
Basically, I want to store these ranges as templates and use them later to say, for example: if the luggage weight is from 0-10 it costs $5 per kg, if it is 10-20 it costs $4.80 per kg, and so on.
This is how the template will be used:
Domain: XYZ
Template01 0 10 20 30 40 50
Increment01% 125% 120% 115% 110% 105% 100%
Domain: ABC - I am picking the same template 'Template01', but the Increment% will be different for this domain, hence:
Template01 0 10 20 30 40 50
Increment02% 150% 140% 130% 120% 110% 100%
The idea is that I want to store a template of weight breaks and later associate it with different Increment% sets. Then, when the user chooses a weight break template, all the Increment% sets configured for that template can be shown for the user to choose one.
Template01 0 10 20 30 40 50
Increment01% 125% 120% 115% 110% 105% 100% [Choice 1]
Increment02% 150% 140% 130% 120% 110% 100% [Choice 2]
Standard approach is:
Template_Name Weight_From Weight_To Price
------------- ----------- --------- -----
Template01 0 10 5
Template01 10 20 4.8
...
Template01 40 50 ...
Template01 50 (null) ...
Template02 0 50 ...
...
Template03 500 600 ...
Template03 600 (null) ...
For a normalised schema you'd need to have tables for the Template, for the Increment, for the Template Weights, and for the Increment Factors.
create table luggage_weight_template (
  luggage_weight_template_id number primary key,
  template_name varchar2(100) unique);

create table luggage_increment (
  luggage_increment_id number primary key,
  increment_name varchar2(100),
  luggage_weight_template_id number references luggage_weight_template(luggage_weight_template_id));

-- one row per weight break in a template
create table template_weight (
  template_weight_id number primary key,
  luggage_weight_template_id number references luggage_weight_template(luggage_weight_template_id),
  weight_from number not null);

-- one row per factor, i.e. per (increment, weight break) pair
create table increment_factor (
  increment_factor_id number primary key,
  increment_factor number not null,
  luggage_increment_id number references luggage_increment(luggage_increment_id),
  template_weight_id number references template_weight(template_weight_id));
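To show how the normalised tables hang together, here is a minimal sqlite3 sketch (Python) that loads the Template01 / Increment01% example and pulls back the weight breaks with their factors; the DDL is adapted to SQLite types, and all ids are illustrative:

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table luggage_weight_template (
  luggage_weight_template_id integer primary key, template_name text unique);
create table luggage_increment (
  luggage_increment_id integer primary key, increment_name text,
  luggage_weight_template_id integer references luggage_weight_template);
create table template_weight (
  template_weight_id integer primary key,
  luggage_weight_template_id integer references luggage_weight_template,
  weight_from integer not null);
create table increment_factor (
  increment_factor_id integer primary key, increment_factor integer not null,
  luggage_increment_id integer references luggage_increment,
  template_weight_id integer references template_weight);
""")

con.execute("insert into luggage_weight_template values (1, 'Template01')")
con.execute("insert into luggage_increment values (1, 'Increment01%', 1)")
con.executemany("insert into template_weight values (?, 1, ?)",
                enumerate([0, 10, 20, 30, 40, 50], start=1))
con.executemany("insert into increment_factor values (?, ?, 1, ?)",
                [(i, f, i) for i, f in enumerate([125, 120, 115, 110, 105, 100], start=1)])

# All Increment% rows configured for the template the user picked.
rows = con.execute("""
    select li.increment_name, tw.weight_from, f.increment_factor
    from luggage_increment li
    join increment_factor f on f.luggage_increment_id = li.luggage_increment_id
    join template_weight tw on tw.template_weight_id = f.template_weight_id
    where li.luggage_weight_template_id = 1
    order by tw.weight_from""")
for name, weight_from, factor in rows:
    print(name, weight_from, factor)

Adding Increment02% is just another luggage_increment row plus six more increment_factor rows; the weight breaks themselves are stored only once, on the template.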

Processing time-based values

I have a list of time-based values in the following form:
20/Dec/2011:10:16:29 9
20/Dec/2011:10:16:30 13
20/Dec/2011:10:16:31 13
20/Dec/2011:10:16:32 9
20/Dec/2011:10:16:33 13
20/Dec/2011:10:16:34 14
20/Dec/2011:10:16:35 6
20/Dec/2011:10:16:36 7
20/Dec/2011:10:16:37 16
20/Dec/2011:10:16:38 5
20/Dec/2011:10:16:39 7
20/Dec/2011:10:16:40 15
20/Dec/2011:10:16:41 12
20/Dec/2011:10:16:42 13
20/Dec/2011:10:16:43 11
20/Dec/2011:10:16:44 6
20/Dec/2011:10:16:45 7
20/Dec/2011:10:16:46 9
20/Dec/2011:10:16:47 14
20/Dec/2011:10:16:49 6
20/Dec/2011:10:16:50 11
20/Dec/2011:10:16:51 15
20/Dec/2011:10:16:52 10
20/Dec/2011:10:16:53 16
20/Dec/2011:10:16:54 12
20/Dec/2011:10:16:55 8
The second column contains a value for each second. Values exist for the complete month, for every single second. I want to sum these values:
Per minute [over the 00-59 seconds of each minute]
Per hour [over the 00-59 minutes of each hour]
Per day [over the 0-23 hours of each day]
Sounds like a job for Excel and a pivot table.
The trick is to parse the text date/time you have into something Excel can work with; splitting it at the first colon will do just that. Assuming the value you have is in cell A2, this formula will convert the text into a real date:
=DATEVALUE(LEFT(A2,SEARCH(":",A2)-1))+TIMEVALUE(RIGHT(A2,LEN(A2)-SEARCH(":",A2)))
Then just create Minute, Hour and Day columns where you subtract out that portion of the date. For example, if the date from the above formula is in C2, the following will subtract out the seconds and give you just up to the minute:
=C2-SECOND(C2)/24/60/60
Then repeat the process for the next two columns to give you the hour and the day:
=D2-MINUTE(D2)/24/60
=E2-HOUR(E2)/24
Then all you have to do is create a pivot table on the data with rows Day, Hour, Minute and value Sum(Value).
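If a spreadsheet is not convenient, the same minute/hour/day roll-ups can be sketched in pandas; the log format follows the sample above, while the file name and column names are assumptions:

import pandas as pd

# Two whitespace-separated columns: "20/Dec/2011:10:16:29" and a per-second value.
df = pd.read_csv("values.log", sep=r"\s+", names=["ts", "value"])
df["ts"] = pd.to_datetime(df["ts"], format="%d/%b/%Y:%H:%M:%S")
df = df.set_index("ts")

per_minute = df["value"].resample("1min").sum()  # sums the 00-59 seconds of each minute
per_hour = df["value"].resample("1h").sum()      # sums the 00-59 minutes of each hour
per_day = df["value"].resample("1D").sum()       # sums the 0-23 hours of each day

print(per_minute.head())
print(per_hour.head())
print(per_day.head())

This mirrors the pivot-table layout: one row per minute, hour, or day, with the per-second values summed within each bucket.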
