In a table, I have an index on column ID (VARCHAR).
Given that an index exists, I would expect
Select Top 1 * from table where ID like 'abc%' order by ID desc
to be very fast.
I then did a
set statistics io on
to compare and found that
Select Top 1 * from table where ID like 'abc___' order by ID desc
is also very fast (same number of reads, 5 in my case). The actual execution plan is also exactly the same in both cases: an index seek followed by a key lookup.
From my understanding, the second query should not be able to use the index and should therefore need significantly more reads.
Since the index is ordered, with 'abc%' the engine should just be able to jump to the end of the index range where entries start with 'abc'. But when I ask for a specific length with 'abc___', it should not be able to jump directly to some place in the index; it should have to scan all entries starting with 'abc' and check their length.
There are several thousand entries in that format in the table.
Why is querying for 'abc___' as fast as querying for 'abc%'?
To replicate this, I set up a simple reproduction of your environment:
IF OBJECT_ID(N'tempdb..#T', 'U') IS NOT NULL DROP TABLE #T;
CREATE TABLE #T (ID VARCHAR(8) NOT NULL PRIMARY KEY);
INSERT #T (ID)
SELECT TOP 100000 123000 + CONVERT(VARCHAR(6), ROW_NUMBER() OVER(ORDER BY a.object_id))
FROM sys.all_objects a, sys.all_objects b
UNION ALL
SELECT TOP 100000 12300000 + CONVERT(VARCHAR(6), ROW_NUMBER() OVER(ORDER BY a.object_id))
FROM sys.all_objects a, sys.all_objects b;
SELECT COUNT(*)
FROM #T
WHERE ID LIKE '123%';
SELECT COUNT(*)
FROM #T
WHERE ID LIKE '123___';
When viewing the execution plan, as you have said, both queries use the same index seek, and if you inspect the plan XML further, you can see that both start off with the same range seek:
<SeekPredicateNew>
<SeekKeys>
<StartRange ScanType="GE">
<RangeColumns>
<ColumnReference Database="[tempdb]" Schema="[dbo]" Table="[#T]" Column="ID" />
</RangeColumns>
<RangeExpressions>
<ScalarOperator ScalarString="'123'">
<Const ConstValue="'123'" />
</ScalarOperator>
</RangeExpressions>
</StartRange>
<EndRange ScanType="LT">
<RangeColumns>
<ColumnReference Database="[tempdb]" Schema="[dbo]" Table="[#T]" Column="ID" />
</RangeColumns>
<RangeExpressions>
<ScalarOperator ScalarString="'124'">
<Const ConstValue="'124'" />
</ScalarOperator>
</RangeExpressions>
</EndRange>
</SeekKeys>
</SeekPredicateNew>
The only difference behind the scenes of the two seeks is the second (residual) predicate: one query further filters by LIKE '123%' and the other by LIKE '123___'.
I think your misunderstanding may be that the index seek has to identify exactly the right records in one go. This is not the case: for the second query, the index seek finds the correct range, and a further filter then identifies the relevant rows within that range.
Why do you think that with 'abc%' it should just be able to jump to the end of the index? The index is indeed ordered, but the engine still has to inspect the individual values to find the first value that does not match 'abc%'. It does not know what that value is until it finds it; the next value could be 'abd', 'abf', or 'ax'.
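To make the range-seek-plus-residual-filter mechanism visible, you can spell the derived range out by hand against the #T repro above. This is only a sketch of what the optimizer derives, not its internal rewrite:
-- The seek covers the half-open range ['123', '124'); the LIKE is then
-- applied as a residual predicate to the rows found in that range.
SELECT COUNT(*)
FROM #T
WHERE ID >= '123' AND ID < '124'  -- what the index seek does
  AND ID LIKE '123___';           -- residual filter on the seeked rows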
I have a question about a subquery in the ORDER BY clause. The query below returns an error. Does it mean that a subquery in the ORDER BY clause must be scalar?
select *
from employees
order by (select * from employees where first_name ='Steven' and last_name='King');
Error:
ORA-00913: too many values
00913. 00000 - "too many values"
Yes, it means that if you use a subquery in ORDER BY it must be scalar.
With select * your subquery returns multiple columns and the DBMS would not know which of these to use for the sorting. And if you selected one column only, you would of course still have to make sure you only select one row. (The difference is that Oracle sees the too-many-columns problem immediately, but detects too many rows only when fetching the data.)
This would be allowed:
select * from employees
order by (select birthdate from employees where employee_id = 12345);
This is a scalar subquery, because it returns only one value (one column, one row). But of course this still makes as little sense as your original query, because the subquery result is independent of the main query, i.e. it returns the same value for every row in the table and thus no actual sorting takes place.
A last remark: a subquery in ORDER BY very seldom makes sense, because it would mean you order by something you don't display. The exception is when looking up a sortkey, e.g.:
select *
from products p
where type = 'shirt' and color = 'blue' and size in ('S', 'M', 'L', 'XL')
order by (select sortkey from sizes s where s.size = p.size);
It means that the valid options for the ORDER BY clause are
an expression,
a position or
a column alias.
A subquery is none of these.
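For illustration, the three valid forms might look like this (a sketch using the employees table from the question, not code from the original post):
SELECT first_name, last_name, salary AS sal
FROM employees
ORDER BY UPPER(last_name),  -- an expression
         1,                 -- a position (the first select-list item, first_name)
         sal;               -- a column alias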
I don't use Oracle PL/SQL often, but I need to understand what, if anything, is wrong in this function, which was created by someone who was at the company before me; I have been told it is not returning the latest record. In some other forum threads it was suggested to use max(dateColumn) instead of "row_number = 1", for example, but I'm not quite sure how and where to incorporate that.
-- Knowing that --
We use Oracle version 12,
CustomObjectTypeA is a custom Oracle OBJECT TYPE defined by a former employee who is no longer here,
V_OtherView is of the Table_Mnd type, also defined by a former employee,
V_ABC_123 is a view created by a former employee as well.
CREATE OR REPLACE FUNCTION F_TABLE_APPROVED (NUMBER_F_UPD number, NUMBER_F_GET VARCHAR2)
RETURN Table_Mnd
IS
V_OtherView Table_Mnd;
BEGIN
SELECT CustomObjectTypeA (FromT.NUMBER_F,
FromT.OP_CODE,
FromT.CATG_CODE,
FromT.CATG_NAME,
FromT.CATG_SORT,
FromT.ORG_CODE,
FromT.ORG_NAME,
FromT.DATA_ENTRY_VALID,
FromT.NUMBER_RECEIVED,
FromT.YEAR_1,
FromT.YEAR_2)
BULK COLLECT INTO V_OtherView
FROM (SELECT NUMBER_F,
OP_CODE,
CATG_CODE,
CATG_NAME,
CATG_SORT,
ORG_CODE,
ORG_NAME,
DATA_ENTRY_VALID,
NUMBER_RECEIVED,
YEAR_1,
YEAR_2,
ROW_NUMBER() OVER (PARTITION BY ORG_CODE ORDER BY NUMBER_RECEIVED DESC, LOAD_DATE DESC) AS ROW_NUMBER
FROM V_ABC_123
WHERE NUMBER_F = NUMBER_F_UPD AND DATA_ENTRY_VALID <> 'OnGoing'
AND LOAD_DATE >= (SELECT sysdate-10 FROM dual)
AND LOAD_DATE <= (SELECT DISTINCT LOAD_DATE
FROM V_ABC_123
WHERE NUMBER_RECEIVED = NUMBER_F_GET)) FromT
WHERE FromT.ROW_NUMBER=1;
RETURN V_OtherView;
END F_TABLE_APPROVED;
The important bits of the query are:
SELECT ...
FROM (select ...,
ROW_NUMBER()
OVER (PARTITION BY ORG_CODE
ORDER BY NUMBER_RECEIVED DESC,
LOAD_DATE DESC) AS ROW_NUMBER
...) FromT
WHERE FromT.ROW_NUMBER = 1;
The "ROW_NUMBER" column is computed according to the following window clause:
PARTITION BY ORG_CODE
ORDER BY NUMBER_RECEIVED DESC, LOAD_DATE DESC
Which means that for each ORG_CODE, it will sort all the records by NUMBER_RECEIVED, LOAD_DATE in descending order. Note that if the columns are Oracle DATEs, they will only be accurate to the nearest second; so if there are multiple records with date/times in the exact same 1-second interval, this sort order will not be guaranteed unique. The logic of ROW_NUMBER will therefore pick one of them arbitrarily (i.e. whichever record happens to be emitted first) and assign it the value "1", and this will be deemed the "latest". Subsequent executions of the same SQL could (in theory) return a different record.
The suspicious part is NUMBER_RECEIVED which sounds like it's a number, not a date? Sorting by this means that the records with the highest NUMBER_RECEIVED will be preferred. Was this intentional?
I'm not sure why the PARTITION is there; it causes the query to return one "latest" record for each value of ORG_CODE that it finds. I can only assume this was intentional.
The problem is that the query can only determine the "latest record" as well as it can based on the data provided to it. In this case, it's possible the data is simply not granular enough to be able to decide which record is the actual "latest" record.
Is this possible?
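If the non-deterministic tie-breaking is indeed the problem, one possible fix is to extend the analytic ORDER BY with a column that is unique per row. A minimal sketch, using a hypothetical unique column REC_ID (the view in the question may or may not expose something like it):
SELECT *
FROM (SELECT v.*,
             ROW_NUMBER() OVER (PARTITION BY ORG_CODE
                                ORDER BY NUMBER_RECEIVED DESC,
                                         LOAD_DATE DESC,
                                         REC_ID DESC) AS RN  -- REC_ID is a hypothetical unique column
      FROM V_ABC_123 v)
WHERE RN = 1;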
I have a sample query:
select vehicle_name
from vehicles
where vehicle_name in ('TOYO', 'HOND');
I may have a lot of vehicle_name values in the IN clause. The result it returns is shown in the first screenshot below (screenshot 1).
What I want is shown in the second screenshot (screenshot 2): the first row should be HOND and the second row TOYO (based on alphabetical order), the third row HOND and the fourth row TOYO, and so on. In other words, two HOND or two TOYO rows should not come one after the other until the end of the result, where no alternating vehicle_name is left.
Thanks,
You could use the analytic function ROW_NUMBER() to generate sequence numbers separately for each vehicle_name and use that for ordering. You don't need to add the function to SELECT - you can use it directly in ORDER BY.
SELECT vehicle_name
FROM vehicles
WHERE vehicle_name in ('HOND', 'TOYO')
ORDER BY row_number() over (partition by vehicle_name order by null), vehicle_name
;
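To see why this interleaves, you can expose the generated sequence numbers in the select list. A small sketch, assuming for example three HOND rows and two TOYO rows in the table:
SELECT vehicle_name,
       ROW_NUMBER() OVER (PARTITION BY vehicle_name ORDER BY NULL) AS rn
FROM vehicles
WHERE vehicle_name IN ('HOND', 'TOYO')
ORDER BY rn, vehicle_name;
-- rn = 1 -> HOND, TOYO; rn = 2 -> HOND, TOYO; rn = 3 -> HOND (no TOYO left),
-- so the names alternate until one of them runs out.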
I have a table with five columns:
table1: ID, CODE, RESULT, RESULT2, RESULT3
I have this SAS code:
data table1;
set table1;
BY ID CODE;
IF FIRST.CODE and RESULT='A' THEN OUTPUT;
ELSE IF LAST.CODE and RESULT NE 'A' THEN OUTPUT;
RUN;
So we are grouping the data by ID and CODE, and then writing to the dataset if certain conditions are met. I want to write a hive query to replicate this. This is what I have:
proc sql;
create table temp as
select *, row_number() over (partition by ID, CODE) as rowNum
from table1;
create table temp2 as
select a.ID, a.CODE, a.RESULT, a.RESULT2, a.RESULT3
from temp a
inner join (select ID, CODE, max(rowNum) as maxRowNum
from temp
group by ID, CODE) b
on a.ID=b.ID and a.CODE=b.CODE
where (a.rowNum=1 and a.RESULT='A') or (a.rowNum=b.maxRowNum and a.RESULT NE 'A');
quit;
There are two issues I see with this.
1) The row that is first or last in each BY group is entirely dependent on the order of rows in table1 in SAS; we aren't ordering by anything. I don't think row order is preserved when translating to a hive query.
2) The SAS code is taking the first row in each BY GROUP or the last, not both. I think that my HIVE query is taking both, resulting in more rows than I want.
Any suggestions or insight on how to improve my query is appreciated. Is it even possible to replicate this SAS code in HIVE?
The SAS code has a BY statement (BY ID CODE;), which tells SAS that the set dataset is sorted at those levels. So the selection for first. and last. is not random.
That said, we can replicate this in HIVE by using the first_value and last_value window functions.
FIRST.CODE should replicate to
first_value(code) over (partition by id order by code) as fcode
Similarly, LAST.CODE would be (with an explicit frame so that last_value sees the whole partition rather than only the rows up to the current one)
last_value(code) over (partition by id order by code rows between unbounded preceding and unbounded following) as lcode
Once you have the fcode and lcode columns, use case when statements for the result column criteria. Like,
case when (code=fcode and result='A') or (code=lcode and result<>'A')
then 1 else 0 end as op_flag
Then fetch the rows where op_flag = 1.
SAMPLE
select id, code, result from (
select t.*,
first_value(code) over (partition by id order by code) as fcode,
-- explicit frame so last_value sees the whole partition, not just the rows up to the current one
last_value(code) over (partition by id order by code rows between unbounded preceding and unbounded following) as lcode
from footab t) f
where (code=fcode and result='A') or (code=lcode and result<>'A')
Regarding point 1), BY group processing requires the input data to be sorted or indexed on the BY variables, so even though the code contains no ordering, the source data is processed in order. If the input data were not indexed/sorted, SAS would throw an error.
Given that, possible differences arise on rows with the same values of the BY variables, especially if RESULT differs.
In SAS, I would pre-sort the data by ID, CODE, RESULT, then use BY ID CODE so the result is not influenced by the physical order of rows.
Regarding 2), FIRST and LAST can both be true in SAS (for a group containing a single row). Since your conditions on RESULT for first and last differ, I guess this is not a source of differences.
I guess you could add another field such as
row_number() over (partition by ID, CODE order by <sort column> desc) as rowNumDesc
to detect the last row with rowNumDesc = 1 (so that you skip the join), as sketched below.
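A sketch of that join-free variant follows; sort_col is a placeholder for whatever column actually defines the row order in the SAS dataset (it has to exist for the result to be deterministic):
select ID, CODE, RESULT, RESULT2, RESULT3
from (select t.*,
             row_number() over (partition by ID, CODE order by sort_col)      as rowNum,
             row_number() over (partition by ID, CODE order by sort_col desc) as rowNumDesc
      from table1 t) x
where (rowNum = 1 and RESULT = 'A')
   or (rowNumDesc = 1 and RESULT <> 'A');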
EDIT:
I think the two programs above both include a random selection of rows for groups with the same values of the ID and CODE variables, especially with the same values of RESULT. But you should get the same number of rows from both; if not, just debug it.
However, the random aspect in SAS is based on the physical order of rows, while ROW_NUMBER's randomness within a group is determined by the implementation of the function in the engine.
For a pair of cursors where the total number of rows in the result set is required immediately after the first FETCH, I came up (after some trial and error) with the query below:
SELECT
col_a,
col_b,
col_c,
COUNT(*) OVER( PARTITION BY 1 ) AS rows_in_result
FROM
myTable JOIN theirTable ON
myTable.col_a = theirTable.col_z
GROUP BY
col_a, col_b, col_c
ORDER BY
col_b
Now when the output of the query is X rows, rows_in_result reflects this accurately.
What does PARTITION BY 1 mean?
I think it probably tells the database to partition the results into pieces of 1-row each
It is an unusual use of PARTITION BY. What it does is put everything into the same partition so that if the query returns 123 rows altogether, then the value of rows_in_result on each row will be 123 (as its alias implies).
It is therefore equivalent to the more concise:
COUNT(*) OVER ()
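A quick way to convince yourself of the equivalence is to run both forms side by side (a sketch against the myTable from your query):
SELECT col_a,
       COUNT(*) OVER ()               AS total_rows_empty_over,
       COUNT(*) OVER (PARTITION BY 1) AS total_rows_partition_by_1
FROM myTable;
-- Both window counts return the same value on every row: the total number of rows.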
Databases are quite free to add restrictions to the OVER() clause. Sometimes PARTITION BY [...] and/or ORDER BY [...] are mandatory, depending on the aggregate function. PARTITION BY 1 may just be a dummy clause used for syntax integrity. The following two are usually equivalent:
[aggregate function] OVER ()
[aggregate function] OVER (PARTITION BY 1)
Note, though, that Sybase SQL Anywhere and CUBRID interpret this 1 as a column index reference, similar to what is possible in the ORDER BY [...] clause. This might appear a bit surprising, as it imposes an evaluation order on the query's projection. In your case, this would then mean that the following are equivalent:
COUNT(*) OVER (PARTITION BY 1)
COUNT(*) OVER (PARTITION BY col_a)
This curious deviation from other databases' interpretation allows for referencing more complex grouping expressions.