I would like to get just the max date value (e.g. just the 2014 result), but if I remove water_type from the GROUP BY it gives an error about the field not being part of an aggregate.
example:
this query:
SELECT Location_Code, water_type, max(Sampled_Date_Time) as maxdate
FROM [LChem1_Chemistry] lc1
where Location_Code = 'mb340'
and water_type is not null
group by Location_Code, water_type
gets this:
Location_Code water_type maxdate
MB340 Group2 2013-09-27 14:00:00
MB340 SubGroup2 2014-03-04 00:00:00
However, I only want the 2014 result (but keep the water_type in the resulting table).
Thanks.
Actually, this is a better example of input data:
Location_Code water_type maxdate
MB117 Group2 2/07/2012 12:58
MB331 Group2 28/02/2013 0:00
MB340 Group2 27/09/2013 14:00
MB340 SubGroup2 4/03/2014 0:00
MB117 Group2 3/07/2012 12:58
MB331 Group2 28/05/2013 0:00
and I want rows 5, 7 & 2 in the resulting table.
What if you try this query:
SELECT TOP 1 Location_Code, water_type, Sampled_Date_Time
FROM [LChem1_Chemistry] lc1
WHERE Location_Code = 'mb340'
AND water_type IS NOT NULL
ORDER BY Sampled_Date_Time DESC
Do you want only one row, or only rows from 2014?
If it is only rows from 2014, then you can add a WHERE clause.
If you want only the top row, then you can use an inner query to return the water_type that you want, and use it in the outer query to return just one row.
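The inner/outer query idea can be sketched with sqlite3 against the second example's data (dates converted to ISO format so they sort as strings; the table and column names mirror the question, but the data here is illustrative):

```python
import sqlite3

# In-memory sketch; names mirror the question's table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE LChem1_Chemistry (Location_Code TEXT, water_type TEXT, Sampled_Date_Time TEXT)")
conn.executemany("INSERT INTO LChem1_Chemistry VALUES (?, ?, ?)", [
    ("MB117", "Group2",    "2012-07-02 12:58:00"),
    ("MB331", "Group2",    "2013-02-28 00:00:00"),
    ("MB340", "Group2",    "2013-09-27 14:00:00"),
    ("MB340", "SubGroup2", "2014-03-04 00:00:00"),
    ("MB117", "Group2",    "2012-07-03 12:58:00"),
    ("MB331", "Group2",    "2013-05-28 00:00:00"),
])

# Inner query finds each location's max date; the outer join pulls the
# water_type from the matching row, so it survives without being grouped.
rows = conn.execute("""
    SELECT t.Location_Code, t.water_type, t.Sampled_Date_Time
    FROM LChem1_Chemistry t
    JOIN (SELECT Location_Code, MAX(Sampled_Date_Time) AS maxdate
          FROM LChem1_Chemistry
          WHERE water_type IS NOT NULL
          GROUP BY Location_Code) g
      ON t.Location_Code = g.Location_Code AND t.Sampled_Date_Time = g.maxdate
    ORDER BY t.Location_Code
""").fetchall()
for r in rows:
    print(r)
```

This returns one row per Location_Code, including the 2014 SubGroup2 row for MB340.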
Here I want to fetch results based on the maximum date in the field, so I wrote a query like this:
$latest_reports = Filelist::
select('report_type_id',DB::raw('filename,max(data_date) as latest_date'))
->where('access_id','=',$retailer_supplier_id->id)
->groupBy('report_type_id')
->orderBy('data_date','desc')
->get();
Here is my table please have a look
id access_id filename report_type_id data_date
1 16 filename1 6 2021-02-01
2 16 filename2 6 2021-01-01
3 16 filename3 6 2021-03-01
4 16 filename4 6 2021-04-01
I am getting this result:
id access_id filename report_type_id data_date
4 16 filename1 6 2021-04-01
I want to get this result:
id access_id filename report_type_id data_date
4 16 filename4 6 2021-04-01
Here the first row's filename value is being returned instead of the one from the max-date row. How do I solve this?
This is a MySQL problem, I think. You have specified only one column to group by, but more than one column in the select list, so what is presented in those other non-aggregated columns isn't guaranteed to be sensible. Please refer to MySQL Handling of GROUP BY.
In SQL I might re-write the query this way:
select * from mytable
where data_date = (select max(data_date) from mytable)
or
select * from mytable
order by data_date desc
limit 1
depending on my particular needs (and I don't know which is better for you)
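Both variants can be sketched with sqlite3 against the question's sample rows (an illustrative in-memory copy, not the real table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER, access_id INTEGER, filename TEXT, report_type_id INTEGER, data_date TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?, ?, ?, ?)", [
    (1, 16, "filename1", 6, "2021-02-01"),
    (2, 16, "filename2", 6, "2021-01-01"),
    (3, 16, "filename3", 6, "2021-03-01"),
    (4, 16, "filename4", 6, "2021-04-01"),
])

# Variant 1: filter on the max date (returns every row tied for the max).
v1 = conn.execute(
    "SELECT * FROM mytable WHERE data_date = (SELECT MAX(data_date) FROM mytable)"
).fetchall()

# Variant 2: order descending and take one row.
v2 = conn.execute(
    "SELECT * FROM mytable ORDER BY data_date DESC LIMIT 1"
).fetchall()
print(v1)  # [(4, 16, 'filename4', 6, '2021-04-01')]
```

Either way the filename comes from the same row as the max date, which is what the question asks for.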
UPDATE:
$latest_reports = Filelist::select([
'report_type_id',
'access_id',
DB::raw('MAX(data_date) AS data_date'),
// here can be listed the other fields
])
->where('access_id', $retailer_supplier_id->id)
->groupBy('report_type_id')
->get();
INITIAL:
I had a similar working query, except there I had a "created_at" timestamp field. Anyway, I think this will work for you:
// assuming, that your table name is "filelist"
$latest_reports = Filelist::select(DB::raw('t.*'))
->from(DB::raw('(SELECT * FROM filelist ORDER BY data_date DESC) t'))
->groupBy('t.report_type_id')
->get();
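Note that the ordered-subquery-then-group trick relies on MySQL's loose GROUP BY handling and is rejected under ONLY_FULL_GROUP_BY. A portable alternative is to join each group back to its max date, sketched here with sqlite3 (table name and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE filelist (id INTEGER, access_id INTEGER, filename TEXT, report_type_id INTEGER, data_date TEXT)")
conn.executemany("INSERT INTO filelist VALUES (?, ?, ?, ?, ?)", [
    (1, 16, "filename1", 6, "2021-02-01"),
    (2, 16, "filename2", 6, "2021-01-01"),
    (3, 16, "filename3", 6, "2021-03-01"),
    (4, 16, "filename4", 6, "2021-04-01"),
])

# Join each report_type_id back to its max data_date, so filename comes
# from the same row as the max date, with no non-aggregated-column ambiguity.
rows = conn.execute("""
    SELECT f.* FROM filelist f
    JOIN (SELECT report_type_id, MAX(data_date) AS md
          FROM filelist WHERE access_id = 16
          GROUP BY report_type_id) g
      ON f.report_type_id = g.report_type_id AND f.data_date = g.md
""").fetchall()
print(rows)  # [(4, 16, 'filename4', 6, '2021-04-01')]
```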
I'm puzzled as to how to build my fact and dimension tables to produce the following results:
I want to count the number of occurrences of logged-in people for each time interval, in this case every 30 minutes. It would look like this:
Example: Person1 login at 10:05:00 and logout at 12:10:00
Person2 login at 10:45:00 and logout at 11:25:00
Person3 login at 11:05:00 and logout at 14:01:00
TimeStart TimeEnd People logged
00:00:00 00:30:00 0
00:30:00 01:00:00 0
...
10:00:00 10:30:00 1
10:30:00 11:00:00 2
11:00:00 11:30:00 3
11:30:00 12:00:00 2
12:00:00 12:30:00 2
12:30:00 13:00:00 1
13:00:00 13:30:00 1
13:30:00 14:00:00 1
14:00:00 14:30:00 0
...
23:30:00 00:00:00 0
So I have a DimTime and a DimDate table that contain hour, half-hour and quarter-hour attributes,
and I have a FactTimestamp table that has the following:
DateLoginID that points to DimDate dateID
DateLogoutID that points to DimDate dateID
TimeLoginID that points to DimTime timeID
TimeLogoutID that points to DimTime timeID
I'd like to know what kind of cube design I would need to achieve that.
I've done it in SQL, if that can help:
--Create tmp table for time interval
CREATE TABLE #tmp(
StartRange time(0),
EndRange time(0)
);
--Interval set to 30 minutes
DECLARE @Interval int = 30
-- Example date
DECLARE @Date datetime = '2017-07-27'
--Set starttime at 2017-07-27 00:00:00
DECLARE @StartTime datetime = DATEADD(HOUR, 0, @Date)
--Set endtime at 2017-07-27 23:59:59
DECLARE @EndTime datetime = DATEADD(SECOND, 59, DATEADD(MINUTE, 59, DATEADD(HOUR, 23, @Date)))
--Populate tmp table with the time intervals, from midnight to 23:59:59
;WITH cSequence AS
(
SELECT
@StartTime AS StartRange,
DATEADD(MINUTE, @Interval, @StartTime) AS EndRange
UNION ALL
SELECT
EndRange,
DATEADD(MINUTE, @Interval, EndRange)
FROM cSequence
WHERE DATEADD(MINUTE, @Interval, EndRange) <= @EndTime
)
INSERT INTO #tmp SELECT CAST(StartRange AS time(0)), CAST(EndRange AS time(0)) FROM cSequence OPTION (MAXRECURSION 0);
--Insert last record: 23:30:00 to 23:59:59
INSERT INTO #tmp (StartRange, EndRange) VALUES ('23:30:00','23:59:59');
SELECT tmp.StartRange AS [Interval], COUNT(ts.TimeIn) AS [Operators]
FROM #tmp tmp
JOIN Timestamp ts ON
--TimeIn is earlier than StartRange OR within the start/end range
(CAST(ts.TimeIn AS time(0)) <= tmp.StartRange OR CAST(ts.TimeIn AS time(0)) BETWEEN tmp.StartRange AND tmp.EndRange)
AND
--AND TimeOut is later than EndRange OR within the start/end range
(CAST(ts.[TimeOut] AS time(0)) >= tmp.EndRange OR CAST(ts.[TimeOut] AS time(0)) BETWEEN tmp.StartRange AND tmp.EndRange)
GROUP BY tmp.StartRange, tmp.EndRange
Really, any kind of hint as to how to achieve it in MDX would be greatly appreciated.
Honestly, I wouldn't do it in MDX against that table structure. Even if you succeed in getting an MDX query that returns that value, and surely it can be done, it will most likely be tremendously complex and hard to maintain and debug, and will probably require multiple passes on the fact table to get the numbers, hurting performance.
I think this is a clear cut case for a periodic snapshot table. Pick your granularity, but even at 1 min snapshots you get 1440 points of data per day for each tuple of all other dimensions. If your login/logout table is large you may need to decrease this to keep its size manageable. In the end, you get a table with time_id, count_of_logins, and whatever other keys you need to other dimensions, and the query you need is just a filter on which time periods you want (give me all hours of the day, but filter on only minutes 00 and 30 of each hour) and the count of total number of logged in users is trivial.
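The snapshot counting itself can be sketched in plain Python with the example sessions. This counts a session in every 30-minute bucket it overlaps; note that a pure overlap count places Person3 in the 14:00 bucket (logout is 14:01), where the example table shows 0:

```python
from datetime import datetime, timedelta

fmt = "%H:%M:%S"
# The three example sessions from the question (login, logout).
sessions = [("10:05:00", "12:10:00"), ("10:45:00", "11:25:00"), ("11:05:00", "14:01:00")]
sessions = [(datetime.strptime(a, fmt), datetime.strptime(b, fmt)) for a, b in sessions]

# One snapshot per 30-minute bucket: count sessions overlapping [lo, hi).
midnight = datetime.strptime("00:00:00", fmt)
counts = []
for i in range(48):
    lo = midnight + timedelta(minutes=30 * i)
    hi = lo + timedelta(minutes=30)
    counts.append(sum(1 for a, b in sessions if a < hi and b > lo))

print(counts[20:24])  # buckets 10:00, 10:30, 11:00, 11:30 -> [1, 2, 3, 2]
```

In the warehouse, each `counts[i]` becomes a row of the periodic snapshot fact keyed by the time dimension, and the cube just sums it.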
I have two tables which I am trying to join based on two criteria. One of the criteria is that a date from t1 is between a date in t2 and the next date in t2. The other is that the name from t1 matches the name from t2.
I.e. if t2 looks like this:
Record Name Date
1 A1234 2016-01-03 04:58:00
2 A1234 2015-12-15 08:34:00
3 A5678 2016-01-04 03:14:00
4 A1234 2016-01-05 21:06:00
Then:
Any records from t1 for Name A1234 with dates between 2016-01-03 04:58:00 and 2016-01-05 21:06:00 would be joined to record 1.
Any records from t1 for Name A1234 with dates between 2015-12-15 08:34:00 and 2016-01-03 04:58:00 would be joined to record 2
Any records from t1 for A1234 after the date of record 4 would be joined to record 4
Any records from t1 for A5678 would be joined to record 3 because there's only one date.
My initial approach is to use a correlated subquery to find the next date. However, due to a large number of records, I determined this would take over a year to execute because it searches all of t2 for the next later date during each iteration. Original SQLite:
CREATE TABLE outputtable AS SELECT * FROM t1, t2 d
WHERE t1.Name = d.Name AND t1.Date BETWEEN d.Date AND (
SELECT * FROM (
SELECT Date from t2
WHERE t2.Name = d.Name
ORDER BY Date ASC )
WHERE Date > d.Date
LIMIT 1 )
Now, I would like to find the next date only once for all records in t2 and create a new column in t2 that contains the next date. This way, I only search for the next date about 400,000 times instead of 56 billion times, significantly improving my performance.
Thus the output of the query I'm looking for would make t2 look like this:
Record Name Date Next_Date
1 A1234 2016-01-03 04:58:00 2016-01-05 21:06:00
2 A1234 2015-12-15 08:34:00 2016-01-03 04:58:00
3 A5678 2016-01-04 03:14:00 2999-12-31 23:59:59
4 A1234 2016-01-05 21:06:00 2999-12-31 23:59:59
Then I would be able to simply query whether t1.Date is between t2.Date and t2.Next_Date.
How can I build a query that will add the next date to a new column in t2?
Rather than add the new column, you should just be able to use a query like the one below to join the tables:
SELECT
T1.*,
T2_1.*
FROM
T1
INNER JOIN T2 T2_1 ON
T2_1.Name = T1.Name AND
T2_1.some_date < T1.some_date
LEFT OUTER JOIN T2 T2_2 ON
T2_2.Name = T1.Name AND
T2_2.some_date > T2_1.some_date
LEFT OUTER JOIN T2 T2_3 ON
T2_3.Name = T1.Name AND
T2_3.some_date > T2_1.some_date AND
T2_3.some_date < T2_2.some_date
WHERE
T2_3.Name IS NULL
You can do the same with NOT EXISTS, but this method often has better performance.
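The NOT EXISTS form of the same "no t2 row in between" idea can be sketched with sqlite3. The t2 rows are the sample from the question; the t1 rows are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (Name TEXT, Date TEXT);
CREATE TABLE t2 (Record INTEGER, Name TEXT, Date TEXT);
INSERT INTO t2 VALUES
  (1, 'A1234', '2016-01-03 04:58:00'),
  (2, 'A1234', '2015-12-15 08:34:00'),
  (3, 'A5678', '2016-01-04 03:14:00'),
  (4, 'A1234', '2016-01-05 21:06:00');
INSERT INTO t1 VALUES
  ('A1234', '2016-01-04 00:00:00'),
  ('A1234', '2016-01-06 00:00:00'),
  ('A5678', '2016-02-01 00:00:00');
""")

# For each t1 row, join the latest t2 row at or before its date:
# no other t2 row for the same name lies between them.
rows = conn.execute("""
    SELECT t1.Name, t1.Date, t2.Record
    FROM t1 JOIN t2
      ON t2.Name = t1.Name AND t2.Date <= t1.Date
    WHERE NOT EXISTS (SELECT 1 FROM t2 x
                      WHERE x.Name = t1.Name
                        AND x.Date <= t1.Date AND x.Date > t2.Date)
    ORDER BY t1.Date
""").fetchall()
print(rows)
```

The three t1 rows land on records 1, 4 and 3, matching the rules described in the question.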
You can speed up (sub)queries by using proper indexes.
To check which indexes are actually used, use EXPLAIN QUERY PLAN.
Your original query, without any indexes, would be executed by SQLite 3.10.0 like this:
0|0|0|SCAN TABLE t1
0|1|1|SEARCH TABLE t2 AS d USING AUTOMATIC COVERING INDEX (name=?)
0|0|0|EXECUTE CORRELATED SCALAR SUBQUERY 1
1|0|0|SCAN TABLE t2
1|0|0|USE TEMP B-TREE FOR ORDER BY
(The "automatic" index is created temporarily just for this query; the optimizer has estimated that this would still be faster than not using any index.)
In this case, you get the most optimal query plan by indexing all columns used for lookups:
create index i1nd on t1(name, date);
create index i2nd on t2(name, date);
0|0|1|SCAN TABLE t2 AS d
0|1|0|SEARCH TABLE t1 USING INDEX i1nd (name=? AND date>? AND date<?)
0|0|0|EXECUTE CORRELATED SCALAR SUBQUERY 1
1|0|0|SEARCH TABLE t2 USING COVERING INDEX i2nd (name=? AND date>?)
I've used this method on tables with around 1 mil rows with success. Obviously, creating an index that will cover this query will help performance.
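EXPLAIN QUERY PLAN can be run straight from Python's sqlite3 module to confirm the indexes are picked up (the exact plan text varies with the SQLite version, so this only checks that an index appears):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (name TEXT, date TEXT);
CREATE TABLE t2 (name TEXT, date TEXT);
CREATE INDEX i1nd ON t1(name, date);
CREATE INDEX i2nd ON t2(name, date);
""")

# The last column of each plan row is the human-readable step description.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT * FROM t1 JOIN t2 ON t2.name = t1.name AND t2.date <= t1.date
""").fetchall()
for step in plan:
    print(step[-1])
```

One table is scanned and the other searched via the `(name, date)` index, mirroring the plan shown above.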
This approach uses RANK to create a value to join against. After creating the RANK in a CTE (I use this for readability reasons, please correct for style or personal preference), use a sub-query to join rnk to rnk + 1; aka the next date.
Here's an example of what the code looks like using your sample values.
IF OBJECT_ID('tempdb..#T2') IS NOT NULL
DROP TABLE #T2
CREATE TABLE #T2
(
Record INT NOT NULL PRIMARY KEY,
Name VARCHAR(10),
[DATE] DATETIME
)
INSERT INTO #T2
VALUES (1, 'A1234', '2016-01-03 04:58:00'),
(2, 'A1234', '2015-12-15 08:34:00'),
(3, 'A5678', '2016-01-04 03:14:00'),
(4, 'A1234', '2016-01-05 21:06:00');
WITH Rank_Dates
AS (Select *
,rank() OVER(PARTITION BY #t2.name ORDER BY #t2.date DESC) AS rnk
FROM #T2)
select RD1.Record,
RD1.Name,
RD1.DATE,
COALESCE (RD2.DATE, '2999-12-31 23:59:59') AS NEXT_DATE
FROM Rank_Dates RD1
LEFT JOIN Rank_Dates RD2
ON RD1.rnk = RD2.rnk + 1
AND RD1.Name = RD2.Name
ORDER BY RD1.Record -- ORDER BY is optional
;
EDIT: added code output below.
The code above produces the following output.
Record Name DATE NEXT_DATE
1 A1234 2016-01-03 04:58:00.000 2016-01-05 21:06:00.000
2 A1234 2015-12-15 08:34:00.000 2016-01-03 04:58:00.000
3 A5678 2016-01-04 03:14:00.000 2999-12-31 23:59:59.000
4 A1234 2016-01-05 21:06:00.000 2999-12-31 23:59:59.000
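Outside SQL, the same "pair each date with its successor" idea needs just one sort per name; ISO-formatted date strings compare correctly as text. A minimal Python sketch with the sample rows:

```python
from collections import defaultdict

rows = [(1, "A1234", "2016-01-03 04:58:00"),
        (2, "A1234", "2015-12-15 08:34:00"),
        (3, "A5678", "2016-01-04 03:14:00"),
        (4, "A1234", "2016-01-05 21:06:00")]
FAR_FUTURE = "2999-12-31 23:59:59"

# Group dates by name, sort each group once, pair each date with its successor.
by_name = defaultdict(list)
for _, name, date in rows:
    by_name[name].append(date)

next_date = {}
for name, dates in by_name.items():
    dates.sort()
    for d, nxt in zip(dates, dates[1:] + [FAR_FUTURE]):
        next_date[(name, d)] = nxt

result = [(rec, name, d, next_date[(name, d)]) for rec, name, d in rows]
print(result[0])  # (1, 'A1234', '2016-01-03 04:58:00', '2016-01-05 21:06:00')
```

This reproduces the Next_Date column from the question, including the far-future sentinel for each name's latest date.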
On a random note: would using CURRENT_TIMESTAMP in place of hard-coding '2999-12-31 23:59:59.000' produce a similar result?
I have a problem with this case: I have a log table with many rows sharing the same ID but with different conditions, and I want to select the max seq and status for each ID. I've tried, but my query shows only one record, not one per ID in the table.
Here's my records table:
order_id  seq  status
1256 2 4
1256 1 2
1257 0 2
1257 3 1
Here is my code:
WITH t AS(
SELECT x.order_id
,MAX(y.seq) AS seq2
,MAX(y.extern_order_status) AS status
FROM t_order_demand x
JOIN t_order_log y
ON x.order_id = y.order_id
where x.order_id like '%12%'
GROUP BY x.order_id)
SELECT *
FROM t
WHERE (t.seq2 || t.status) IN (SELECT MAX(tt.seq2 || tt.status) FROM t tt);
This query works, but sometimes it gives wrong values or shows only some records, not all of them.
I want the result to be like this:
order_id  seq2  status
1256 2 4
1257 3 2
I think you just want an aggregation:
select d.order_id, max(l.seq2) as seq2, max(l.status) as status
from t_order_demand d join
t_order_log l
on d.order_id = l.order_id
where d.order_id like '%12%'
group by d.order_id;
I'm not sure what your final where clause is supposed to do, but it appears to do unnecessary filtering, compared to what you want.
I wonder how I can select a range of data depending on the date range.
I have this data in my payment table, with dates in dd/mm/yyyy format:
Id Date Amount
1 4/1/2011 300
2 10/1/2011 200
3 27/1/2011 100
4 4/2/2011 300
5 22/2/2011 400
6 1/3/2011 500
7 1/1/2012 600
The closing date is on the 27th of every month, so I would like to group all the data from the 27th until the 26th of the next month into one group.
That is to say, I would like the output to be like this:
Group 1
1 4/1/2011 300
2 10/1/2011 200
Group 2
1 27/1/2011 100
2 4/2/2011 300
3 22/2/2011 400
Group 3
1 1/3/2011 500
Group 4
1 1/1/2012 600
The context of your question isn't clear. Are you querying a database?
If so, you are asking about datetime values, but it seems you have a column in string format.
First of all, convert your data to a datetime data type (or some equivalent; which DB engine are you using?), and then use a grouping criterion like this:
GROUP BY datepart(month, dateadd(day, -26, [datefield])), DATEPART(year, dateadd(day, -26, [datefield]))
EDIT:
So, you are in Linq?
Different language, same logic:
.GroupBy(x => DateTime
.ParseExact(x.Date, "dd/MM/yyyy", CultureInfo.InvariantCulture) // Supposing your date field is of string data type; note capital "MM" (months) — lowercase "mm" means minutes
.AddDays(-26)
.ToString("yyyyMM"));
If you are going to do this frequently, it would be worth investing in a table that assigns a unique identifier to each month and the start and end dates:
CREATE TABLE MonthEndings
(
MonthID INTEGER NOT NULL PRIMARY KEY,
StartDate DATE NOT NULL,
EndDate DATE NOT NULL
);
INSERT INTO MonthEndings VALUES(201101, '27/12/2010', '26/01/2011');
INSERT INTO MonthEndings VALUES(201102, '27/01/2011', '26/02/2011');
INSERT INTO MonthEndings VALUES(201103, '27/02/2011', '26/03/2011');
INSERT INTO MonthEndings VALUES(201112, '27/11/2011', '26/12/2011');
You can then group accurately using:
SELECT M.MonthID, P.Id, P.Date, P.Amount
FROM Payments AS P
JOIN MonthEndings AS M ON P.Date BETWEEN M.StartDate and M.EndDate
ORDER BY M.MonthID, P.Date;
Any group headings etc are best handled out of the DBMS - the SQL gets you the data in the correct sequence, and the software retrieving the data presents it to the user.
If you can't translate SQL to LINQ, that makes two of us. Sorry, I have never used LINQ, so I've no idea what is involved.
SELECT *, CASE WHEN datepart(day,date)<27 THEN datepart(month,date)
ELSE datepart(month,date) % 12 + 1 END as group_name
FROM payment
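The "shift back 26 days, then group by month" trick from the first answer can be sketched in Python against the sample payments (groups keyed as yyyyMM of the shifted date):

```python
from datetime import datetime, timedelta
from itertools import groupby

payments = [(1, "4/1/2011", 300), (2, "10/1/2011", 200), (3, "27/1/2011", 100),
            (4, "4/2/2011", 300), (5, "22/2/2011", 400), (6, "1/3/2011", 500),
            (7, "1/1/2012", 600)]

def period(datestr):
    # Shift back 26 days so the 27th..26th billing window lands in one calendar month.
    d = datetime.strptime(datestr, "%d/%m/%Y") - timedelta(days=26)
    return d.strftime("%Y%m")

# Sort by period (stable, so ids keep their order), then group.
ordered = sorted(payments, key=lambda p: period(p[1]))
groups = {k: [p[0] for p in g] for k, g in groupby(ordered, key=lambda p: period(p[1]))}
print(groups)
```

The four resulting groups hold payment ids [1, 2], [3, 4, 5], [6] and [7], matching Groups 1–4 in the question.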