I have an MS Access database table that records the communication status of values from several meters. The data is logged directly to the table, but I need to make sure that the table is populating. From the sample data you can see that the Comm columns don't read false or 0, so I want to return a log entry whenever the difference between now and "Date / Time" is greater than 5 minutes.
Date / Time FCB Comm BOF Comm EAF Comm FGP Comm
9/6/2011 10:29:10 1 1 1 1
9/6/2011 10:28:01 1 1 1 1
9/6/2011 10:27:11 1 1 1 1
9/6/2011 10:26:20 1 1 1 1
9/2/2011 08:17:01 1 1 1 1
9/2/2011 08:16:10 1 1 1 1
9/2/2011 08:15:02 1 1 1 1
9/2/2011 08:14:08 1 1 1 1
I wanted to know if anyone could tell me whether this looks like a reasonable query to run:
SELECT Data.[Date / Time], Data.[Ford Chiller Building Comm Okay],
Data.[Basic Oxygen Furnace Comm Okay], Data.[Electro-Arc Furnace Comm Okay],
Data.[J-9 Shop Comm Okay], Data.[Ford Glass Plant Comm Okay]
FROM Data
where DateDiff("n",now(), Data.[Date / Time] ) < 5;
You need something running continuously that generates a notification whenever expected data doesn't appear, and there are a couple of approaches you can take to do that.
One is to continuously run a query like the one you have above, checking the most recent date in the table against the value of the now() function.
Another approach is to take the latest date in your table, wait (sleep) for 5 minutes, and then check the table again for any newer entries. My expectation is that this approach will generate fewer hits on your table.
You could also just check the most recent date every 5 minutes regardless of the previous time checked and see if data hasn't come in.
You need to set up your notification loop first, then you can experiment with different approaches.
All you should really need to do is return the number of rows in the table whose timestamp is within 5 minutes of now(). You shouldn't need additional row detail, just whether the count is 0 or not.
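For example, a minimal sketch of that check in Access SQL (table and column names taken from your query above; the 5-minute window is an assumption you can tune):
-- one row per check: RecentRows = 0 means nothing was logged in the last 5 minutes
SELECT Count(*) AS RecentRows
FROM Data
WHERE Data.[Date / Time] >= DateAdd("n", -5, Now());
If RecentRows comes back as 0, no data has arrived in the last 5 minutes and your loop should raise a notification.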
I have something like this:
id day description
1 1 hi
1 1 today
1 1 is a beautifull
1 1 day
1 2 exemplo
1 2 for
1 2 this case
I need to write a function that, for each day, concatenates the description column and returns a result like this:
id day description
1 1 hi today is a beautifull day
1 2 exemplo for this case
Any idea how I can do this using a loop in a function in Oracle?
You need a way of determining which order the values should be aggregated. The snippet below will rely on the implicit order in which Oracle reads the rows from the datafiles - if you have row movement enabled then you may get inconsistent results as the rows can be read in different orders as they are relocated in the underlying datafiles.
SELECT id, day,
       LISTAGG( description, ' ' ) WITHIN GROUP ( ORDER BY ROWNUM ) AS description
FROM your_table
GROUP BY id, day
It would be better to have another column that stores the order within each day.
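For example, assuming a hypothetical SEQ column that stores each fragment's position within its (id, day) group, the aggregation becomes deterministic:
-- SEQ is a hypothetical ordering column populated when the rows are inserted
SELECT id, day,
       LISTAGG( description, ' ' ) WITHIN GROUP ( ORDER BY seq ) AS description
FROM your_table
GROUP BY id, day
Any monotonically increasing value captured at insert time (a sequence number, an insert timestamp) would serve the same purpose.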
I am a beginner with Spotfire. I have a question about calculating the difference for some column values. A sample table could look like this:
id timestamp state
1 7/1/2016 12:00:01 AM 1
2 7/1/2016 12:00:03 AM 0
3 7/1/2016 12:00:04 AM 1
4 7/1/2016 12:00:06 AM 0
5 7/1/2016 12:00:09 AM 1
6 7/1/2016 12:00:10 AM 0
7 7/1/2016 12:00:12 AM 1
I want to calculate the time difference between timestamps for the rows where the state is 1; the final table I want is:
id timestamp state time_difference
3 7/1/2016 12:00:04 AM 1 3
5 7/1/2016 12:00:09 AM 1 5
7 7/1/2016 12:00:12 AM 1 3
It seems that I should define an expression for the calculation, but I have no idea how to do it for just one parameter :(. Could somebody help me?
Still one more small question: what if the timestamp column is just a numeric value, how can I calculate the difference then? Is there any function like DateDiff() for that case? For example:
id times state
1 12 1
2 7 0
3 10 1
4 11 0
5 6 1
6 9 0
7 7 1
the result could be :
id times state difference
3 10 1 -2
5 6 1 -4
7 7 1 1
After running the code, I noticed an issue: if a row has the same timestamp as the previous row, the difference stays the same as before, but the difference for rows with the same timestamp should actually be 0.
Thanks for your help :)
Assuming your data is sorted in ascending order by [timestamp] before you import it, you can partition using the Previous function with Over where the [state]=1.
Insert a calculated column with this expression:
If([state]=1,DateDiff("ss",Min([timestamp]) OVER (Previous([timestamp])),[timestamp]))
You will see it populated in your table.
Then if you ONLY want to see the rows that have the difference you mentioned, on your table you can...
Right Click > Properties > Data > Limit data using expression >
And insert the expression: [time_difference] > 1
This will result in a table limited to just those rows.
I have a table in an Oracle database which may contain amounts >= $10M or <= $-10B.
If the value is greater than or equal to $10M, I need to break it into one or more 99999999.99 chunks and also include the remainder.
If the value is less than or equal to $-10B, I need to break it into one or more 999999999.99 chunks and also include the remainder.
Your question is somewhat hard to read and you did not provide examples, but here is something to start with, which may help you or someone with a similar problem.
Let's say you have this data and you want to divide amounts into chunks not greater than 999:
id amount
-- ------
1 1500
2 800
3 2500
This query:
select id, amount,
case when level=floor(amount/999)+1 then mod(amount, 999) else 999 end chunk
from data
connect by level<=floor(amount/999)+1
and prior id = id and prior dbms_random.value is not null
...divides the amounts; the last row for each id contains the remainder. Output is:
ID AMOUNT CHUNK
------ ---------- ----------
1 1500 999
1 1500 501
2 800 800
3 2500 999
3 2500 999
3 2500 502
SQLFiddle demo
Edit: full query according to additional explanations:
select id, amount,
case
when amount>=0 and level=floor(amount/9999999.99)+1 then mod(amount, 9999999.99)
when amount>=0 then 9999999.99
when level=floor(-amount/999999999.99)+1 then -mod(-amount, 999999999.99)
else -999999999.99
end chunk
from data
connect by ((amount>=0 and level<=floor(amount/9999999.99)+1)
or (amount<0 and level<=floor(-amount/999999999.99)+1))
and prior id = id and prior dbms_random.value is not null
SQLFiddle
Please adjust numbers for positive and negative borders (9999999.99 and 999999999.99) according to your needs.
There are more possible solutions (recursive CTE query, PLSQL procedure, maybe others), this hierarchical query is one of them.
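For example, a sketch of the recursive CTE variant (Oracle 11gR2 or later), handling positive amounts only and using the same 999 chunk size as the first example; negative amounts would need an extra branch like in the query above:
WITH chunks (id, amount, remaining, chunk) AS (
  -- anchor member: take the first chunk (or the whole amount if it fits)
  SELECT id, amount, amount - LEAST(amount, 999), LEAST(amount, 999)
  FROM data
  UNION ALL
  -- recursive member: keep peeling 999-sized chunks until nothing remains
  SELECT id, amount, remaining - LEAST(remaining, 999), LEAST(remaining, 999)
  FROM chunks
  WHERE remaining > 0
)
SELECT id, amount, chunk
FROM chunks
ORDER BY id;
Each pass peels off at most one full chunk, so the last row per id carries the remainder, just like the CONNECT BY version.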
I would like to sort this data by descending number of events and by latest date, grouped by ID.
I have tried proc sql:
proc sql;
create table new as
select *
from old
group by ID
order by events desc, date desc;
quit;
The result I currently get is
ID Date Events
1 09/10/2015 3
1 27/06/2014 3
1 03/01/2014 3
2 09/11/2015 2
3 01/01/2015 2
2 16/10/2014 2
3 08/12/2013 2
4 08/10/2015 1
5 09/11/2014 1
6 02/02/2013 1
Although the dates and events are sorted descending, the IDs with multiple events are no longer grouped together.
Would it be possible to achieve the below in fewer steps?
ID Date Events
1 09/10/2015 3
1 27/06/2014 3
1 03/01/2014 3
3 01/01/2015 2
3 08/12/2013 2
2 09/11/2015 2
2 16/10/2014 2
4 08/10/2015 1
5 09/11/2014 1
6 02/02/2013 1
Thanks
It looks to me like you're trying to sort by descending event, then by either the earliest or latest date (I can't tell which one from your explanation), also descending, and then by id. In your proc sql query, you could try calculating the min or max of the Date variable, grouped by event and id, and then sort the result by descending event, the descending min/max of the date, and id.
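For example, a sketch along those lines (assuming Date is a SAS date variable; swap max for min if you want the earliest date per ID, and note that SAS will print its usual remerging note):
proc sql;
  create table new as
  select *, max(date) as max_date   /* latest date per ID, remerged onto every row */
  from old
  group by ID
  order by events desc, max_date desc, ID;
quit;
If you don't want the extra max_date column in the final table, you can drop it in a follow-up data step.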
I want to transform a Hive table by aggregating based on averages. However, I don't want the average value of an entire column, I want the average of the records in that column that have the same type in another column.
Here's an example, easier than trying to explain:
TABLE I HAVE:
Timestamp  CounterName  CounterValue  MaxCounterValue  MinCounterValue
00:00      Counter1     3             3                1
00:00      Counter2     4             5                2
00:00      Counter3     1             4                1
00:00      Counter4     6             6                1
00:05      Counter1     3             5                2
00:05      Counter2     2             2                2
00:05      Counter3     4             5                4
00:05      Counter4     6             6                5
...
TABLE I WANT:
CounterName  AvgCounterValue  MaxCounterValue  MinCounterValue
Counter1     3                5                1
Counter2     3                5                2
Counter3     2.5              5                1
Counter4     6                6                1
So I have a list of counters, each of which has multiple records (one per 5-minute time period). Every time a counter is logged, it has a value, a max value during that 5 minutes, and a min value. I want to aggregate this huge table so that it has just one record for each counter, recording the overall average value for that counter across all the records in the table, along with the overall min/max value of that counter in the table.
The reason this is difficult is that all the documentation shows is how to aggregate by the average of a whole column - I don't know how to split it into groups.
Here's the script I've started with:
FROM HighCounters INSERT OVERWRITE TABLE MdsHighCounters
SELECT
HighCounters.CounterName AS CounterName,
HighCounters.CounterValue AS CounterValue,
HighCounters.MaxCounterValue AS MaxCounterValue,
HighCounters.MinCounterValue AS MinCounterValue
GROUP BY HighCounters.CounterName;
And I don't know where to go from there... any ideas? Thanks!!
I think I solved my own problem:
FROM HighCounters INSERT OVERWRITE TABLE MdsHighCounters
SELECT
HighCounters.CounterName AS CounterName,
AVG(HighCounters.CounterValue) AS CounterValue,
MAX(HighCounters.MaxCounterValue) AS MaxCounterValue,
MIN(HighCounters.MinCounterValue) AS MinCounterValue
GROUP BY HighCounters.CounterName;
Does this look right to you?