SQL Server performance using ROWCOUNT

I use SET ROWCOUNT 27900 and then select two columns:
Select
    emp.employeeid,
    empd.employeedetailid
From
    employee emp (NOLOCK)
    join employeedetail empd (NOLOCK)
        on emp.employeeid = empd.employeeid
This query executes in 3 seconds.
If I use SET ROWCOUNT 27950, the same query takes 20 seconds to execute.
I am not a SQL DBA; why is there a difference of 17 seconds for just 50 more rows? Is this related to page size or an index?
Can anyone help me fine-tune the query?

Have you tried doing this using TOP instead of SET ROWCOUNT, and adding an ORDER BY?
SELECT TOP 27900 emp.employeeid, ...
ORDER BY ...;
This will give the optimizer a much better chance at optimizing. In simplest terms, the optimizer can take a TOP clause into account when building the plan, whereas SET ROWCOUNT is applied only at execution time, so the plan is built as though the entire result set were needed...
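A fuller version of the suggested rewrite, using the columns from the question (the ORDER BY column here is an assumption - any deterministic key will do):
SELECT TOP (27900)
    emp.employeeid,
    empd.employeedetailid
FROM employee emp
JOIN employeedetail empd
    ON emp.employeeid = empd.employeeid
ORDER BY emp.employeeid;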


How to resolve temp table space issue in Oracle

I need help from DBAs here.
I have a query that fetches around 1800 records from the DB.
However, Oracle's temp tablespace is filling up, which makes Oracle respond very slowly.
I have identified the query that is causing the issue, and it is something like this:
SELECT * FROM A a, B b WHERE a.id = b.fieldId AND b.col1 = :1 AND b.col2 = :2 ORDER BY TO_NUMBER(b.col3) ASC
This query returns around 1800 records, and dba_segments shows that 44 GB out of 50 GB is occupied.
I am not sure what the solution could be.
I am using Oracle 12.1.
Please look into this and suggest whether I have to rewrite the query.
Thanks in advance.
It is hard to tell how resource-intensive a query is without at least checking its query plan.
It might not be your query at all that "ate" all the TEMP space. Here is how to get the top 20 sessions with the highest TEMP usage:
select round(u.blocks * 8192 / 1024 / 1024, 2) "TEMP usage, Mb",  -- assumes the default 8 KB block size
       s.sid, s.osuser, s.machine, s.module, s.action, s.status, s.event,
       s.last_call_et, s.wait_time, s.sql_id, s.sql_child_number
from v$session s
join v$sort_usage u on s.saddr = u.session_addr
order by u.blocks desc
fetch first 20 rows with ties;
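Once you have a suspect sql_id from that list, you can pull the statement text from v$sql (bind :suspect_sql_id to the sql_id you found):
-- look up the full text of the statement behind a sql_id
select sql_fulltext
from v$sql
where sql_id = :suspect_sql_id;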

How to set range for limit clause in hive

How to set a range for the limit clause in Hive? I have tried the query below but it failed with a syntax error. Can someone please help?
select * from table limit 1000,2000;
You can use the ROW_NUMBER window function and set the range limit.
The query below will return only the first 20 records from the table:
hive> select * from
      (
        select *, row_number() over (order by id) as rowid from <tab_name>
      ) t
      where rowid > 0 and rowid <= 20;
Or use the BETWEEN operator to specify the range:
hive> select * from
      (
        select *, row_number() over (order by id) as rowid from <tab_name>
      ) t
      where rowid between 1 and 20;
To fetch rows 21 to 40, increase the lower/upper bounds accordingly:
hive> select * from
      (
        select *, row_number() over (order by id) as rowid from <tab_name>
      ) t
      where rowid > 20 and rowid <= 40;
The LIMIT clause is used to set a ceiling on the number of rows in the result set. You are getting a syntax error because of an incorrect usage of this HQL clause.
The query could be written as the following to return no more than 2000 rows:
SELECT * FROM table LIMIT 2000;
You could also write it like so to return no more than 1000 rows:
SELECT * FROM table LIMIT 1000;
However, you cannot combine both into the same argument for LIMIT; the LIMIT argument must evaluate to a constant value. (Hive 2.0.0 added a two-argument, offset form of LIMIT - see the last answer below.)
I will try to expand on this information a bit to help solve your problem. If you are attempting to "paginate" your results, the following may be of use.
FIRST, I would recommend against leaning on HQL for pagination; in most situations that is more efficiently implemented on the application side (query the large result set, cache what you need, paginate with application logic). If you have no choice but to pull out ranges of rows, you can get the desired effect through a combination of the LIMIT, ORDER BY, and OFFSET clauses.
LIMIT: This will limit your result set to a maximum number of rows.
ORDER BY: This will sort/order your result set based on one or more columns.
OFFSET: This will start your result set at a certain row after the logical first entry in the table.
You may combine these three clauses to effectively query "pages" of your table. For example, the following three queries show how to get the first 3 blocks of data from a table where each block contains 1000 rows and the target table's 'column1' determines the logical order.
SELECT title as "Page 1", column1, column2, ... FROM table
ORDER BY column1 LIMIT 1000 OFFSET 0;
SELECT title as "Page 2", column1, column2, ... FROM table
ORDER BY column1 LIMIT 1000 OFFSET 1000;
SELECT title as "Page 3", column1, column2, ... FROM table
ORDER BY column1 LIMIT 1000 OFFSET 2000;
Each query declares 'column1' as the sorting value with ORDER BY. The queries will return no more than 1000 rows due to the LIMIT clause. Each result set will start at a different row due to the OFFSET being incremented by the "page size" for each query.
I am not sure what you are trying to achieve, but ...
Your original syntax (LIMIT 1000,2000) skips the first 1000 rows and returns the next 2000 (i.e. rows 1001 through 3000 of the result set), but only if you are using a Hive version of at least 2.0.0; you can check with:
hive --version
(https://issues.apache.org/jira/browse/HIVE-11531)
LIMIT in Hive returns some 'n' records with no guaranteed order; it is not meant to print a range of records.
You may use ORDER BY in conjunction with LIMIT to get what you want.
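For instance, a deterministic "first 20 rows" (a sketch, reusing the id ordering column assumed in the earlier answers):
select * from <tab_name> order by id limit 20;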

Oracle SELECT * FROM LARGE_TABLE - takes minutes to respond

So I have a simple table with 5 or so columns, one of which is a CLOB containing some JSON data.
I am running:
1. SELECT * FROM BIG_TABLE
2. SELECT * FROM BIG_TABLE WHERE ROWNUM < 2
3. SELECT * FROM BIG_TABLE WHERE ROWNUM = 1
4. SELECT * FROM BIG_TABLE WHERE ID=x
I expect that any fractionally intelligent relational database would return the data immediately. We are not imposing order by/group by clauses, so why not return the data as and when you find it?
Of all the forms of SELECT statements above, only 4 returned in a sub-second manner. This is unexpected for 1-3, which take between 1 and 10 minutes before the query shows any response in SQL Developer. SQL Developer has the standard SQL array fetch size of 50 (JDBC fetch size of 50 rows), so at a minimum it is taking 1-10 minutes to return 50 rows from a simple table with no joins, on a super high-performance RAC cluster backed by a fancy 4-tiered EMC disk subsystem.
Explain plans show a table scan. Fine, but why should I wait 1-10 minutes for the results with rownum in the WHERE clause?
What is going on here?
OK - I found the issue. ROWNUM does not operate the way I thought it did, and in the code above it never stops the full table scan.
This is because:
ROWNUM is assigned during the predicate operation (WHERE clause evaluation) and incremented afterwards, i.e. a row makes it into the result set and then gets its ROWNUM assigned.
In order to filter by ROWNUM you need it to already exist, something like:
SELECT * FROM (SELECT * FROM BIG_TABLE) WHERE ROWNUM <= 5
In effect this means there is no way to pull the top 5 rows from a table, when no other filter criteria are involved, without first selecting the entire table.
I solved my problem like this...
SELECT * FROM (SELECT * FROM BIG_TABLE WHERE
DATE_COL BETWEEN :Date1 AND :Date2) WHERE ROWNUM < :x;
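For a true top-N by some ordering, the usual pattern puts the ORDER BY inside the subquery so that ROWNUM is applied to the already-sorted rows; on Oracle 12c and later the row-limiting clause does the same thing more directly. Both are sketches (ordering by the ID column from query 4 is an assumption):
-- classic pattern: sort first, then cut with ROWNUM
SELECT * FROM (SELECT * FROM BIG_TABLE ORDER BY ID) WHERE ROWNUM <= 5;

-- 12c and later: row-limiting clause
SELECT * FROM BIG_TABLE ORDER BY ID FETCH FIRST 5 ROWS ONLY;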

Optimized Query Execution Time

My query is:
SELECT unnest(array [repgroupname,repgroupname||'-'
||masteritemname,repgroupname||'-' ||masteritemname||'-'||itemname]) AS grp
,unnest(array [repgroupname,masteritemname,itemname]) AS disp
,groupname1
,groupname2
,groupname3
,sum(qty) AS qty
,sum(freeqty) AS freeqty
,sum(altqty) AS altqty
,sum(discount) AS discount
,sum(amount) AS amount
,sum(stockvalue) AS stockvalue
,sum(itemprofit) AS itemprofit
FROM (
SELECT repgroupname
,masteritemname
,itemname
,groupname1
,groupname2
,groupname3
,units
,unit1
,unit2
,altunits
,altunit1
,altunit2
,sum(s2.totalqty) AS qty
,sum(s2.totalfreeqty) AS freeqty
,sum(s2.totalaltqty) AS altqty
,sum(s2.totaltradis + s2.totaladnldis) AS discount
,sum(amount) AS amount
,sum(itemstockvalue) AS stockvalue
,sum(itemprofit1) AS itemprofit
FROM sales1 s1
INNER JOIN sales2 s2 ON s1.txno = s2.txno
INNER JOIN items i ON i.itemno = s2.itemno
GROUP BY repgroupname
,masteritemname
,itemname
,groupname1
,groupname2
,groupname3
,units
,unit1
,unit2
,altunits
,altunit1
,altunit2
ORDER BY itemname
) AS tt
GROUP BY grp
,disp
,groupname1
,groupname2
,groupname3
Here:
Sales1 table has 144,513 records
Sales2 table has 438,915 records
items table has 78,512 records
This query takes 6 seconds to produce the result.
How can I optimize this query?
I am using PostgreSQL 9.3.
That is a truly horrible query.
You should start by losing the ORDER BY in the sub-select - the ordering is discarded by the outer query.
Beyond that, ask yourself why you need to see a summary of every single row in the DBMS - does this serve any useful purpose? (If the query returns more than about 20 rows, the answer is no.)
You might be able to make it go faster by ensuring that the foreign keys in the tables are indexed (indexes are THE most important bit of information to look at whenever you're talking about performance, and you've told us nothing about them); see the sketch below.
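A sketch of the kind of indexes meant here, assuming none exist yet on the join columns used in the inner query:
-- index the foreign-key columns sales2 uses to join to sales1 and items
CREATE INDEX idx_sales2_txno ON sales2 (txno);
CREATE INDEX idx_sales2_itemno ON sales2 (itemno);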
Maintaining the query's result as a regularly refreshed snapshot (a materialized view) will also mitigate the performance impact.
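PostgreSQL 9.3 supports materialized views, so the snapshot approach could look like this (a sketch using a trimmed-down version of the question's inner aggregate; the view name and column subset are illustrative):
-- store the aggregated result once ...
CREATE MATERIALIZED VIEW sales_summary AS
SELECT repgroupname, masteritemname, itemname,
       sum(s2.totalqty) AS qty,
       sum(amount) AS amount
FROM sales1 s1
JOIN sales2 s2 ON s1.txno = s2.txno
JOIN items i ON i.itemno = s2.itemno
GROUP BY repgroupname, masteritemname, itemname;

-- ... and re-run periodically to bring the snapshot up to date
REFRESH MATERIALIZED VIEW sales_summary;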

SQL stored procedure takes more time to execute as records increase - is there any way to optimize it?

I have 600,000 records and I want to fetch 10 of them, since I display only 10 records at a time in the grid. My stored procedure works fine when I fetch records between 1 and 10,000 (e.g. 500-510), but beyond that the execution time grows as the row number increases; e.g. fetching records between 100,000 and 100,010 takes much longer.
Can anyone please help me? I have used ROW_NUMBER() to get the row number and BETWEEN to retrieve the data.
Please suggest an optimized way to get the records.
The stored procedure creates a SQL query as given below:
SELECT FuelClaimId FROM
( SELECT fc.FuelClaimId, ROW_NUMBER() OVER (ORDER BY fc.FuelClaimId) AS RowNum
  FROM FuelClaims fc
  INNER JOIN Vehicles v ON fc.VehicleId = v.VehicleId
  INNER JOIN Drivers d ON d.DriverId = v.OfficialID
  INNER JOIN Departments de ON de.DepartmentId = d.DepartmentId
  INNER JOIN Provinces p ON de.ProvinceId = p.ProvinceId
  INNER JOIN FuelRates f ON f.FuelRateId = fc.FuelRateId
  INNER JOIN FuelClaimStatuses fs ON fs.FuelClaimStatusId = fc.statusid
  INNER JOIN LogsheetMonths l ON l.LogsheetMonthId = f.LogsheetMonthId
  WHERE fc.IsDeleted = 0 ) AS MyDerivedTable
WHERE MyDerivedTable.RowNum BETWEEN 600000 AND 600010
Try this instead:
SELECT TOP 10 fc.FuelClaimId
FROM FuelClaims fc
INNER JOIN Vehicles v ON fc.VehicleId = v.VehicleId
INNER JOIN Drivers d ON d.DriverId = v.OfficialID
INNER JOIN Departments de ON de.DepartmentId = d.DepartmentId
INNER JOIN Provinces p ON de.ProvinceId = p.ProvinceId
INNER JOIN FuelRates f ON f.FuelRateId = fc.FuelRateId
INNER JOIN FuelClaimStatuses fs ON fs.FuelClaimStatusId = fc.statusid
INNER JOIN LogsheetMonths l ON l.LogsheetMonthId = f.LogsheetMonthId
WHERE fc.IsDeleted = 0 AND fc.FuelClaimId BETWEEN 600001 AND 600010
ORDER BY fc.FuelClaimId
Also, BETWEEN is inclusive, so BETWEEN 10 AND 20 actually returns 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 and 20 - that is 11 rows, not 10. As identity values usually start at 1, you really want BETWEEN 11 AND 20 (hence the 600001 in the above).
The above query should fix your issue where your performance degrades as you query the larger range of items.
While it won't always return exactly 10 records, the fix for that is:
WHERE fc.IsDeleted = 0 AND fc.FuelClaimId > @LastMaxFuelClaimId
where @LastMaxFuelClaimId is the maximum FuelClaimId returned by the previous query execution.
Edit: The reason it keeps getting slower is that it has to read more and more of the table to reach the next chunk: it doesn't skip the first 600,000 records, it reads them all and then returns only the next 10, so each time you query it reads all the previous records over again. The query above does not suffer from the same problem.
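Put together, a keyset-pagination version of the full query might look like this (a sketch; @LastMaxFuelClaimId would be supplied by the application from the previous page):
-- fetch the next page of 10 claims after the last id already shown
DECLARE @LastMaxFuelClaimId INT = 600000;

SELECT TOP 10 fc.FuelClaimId
FROM FuelClaims fc
INNER JOIN Vehicles v ON fc.VehicleId = v.VehicleId
INNER JOIN Drivers d ON d.DriverId = v.OfficialID
INNER JOIN Departments de ON de.DepartmentId = d.DepartmentId
INNER JOIN Provinces p ON de.ProvinceId = p.ProvinceId
INNER JOIN FuelRates f ON f.FuelRateId = fc.FuelRateId
INNER JOIN FuelClaimStatuses fs ON fs.FuelClaimStatusId = fc.statusid
INNER JOIN LogsheetMonths l ON l.LogsheetMonthId = f.LogsheetMonthId
WHERE fc.IsDeleted = 0 AND fc.FuelClaimId > @LastMaxFuelClaimId
ORDER BY fc.FuelClaimId;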
You should post an execution plan, but a probable cause of performance problems would be inadequate or missing indexing.
Make sure you have
an index on all your foreign key relations
a covering index on the fields you retrieve and select from
Covering index example:
CREATE INDEX IX_FUELCLAIMS_FUELCLAIMID_ISDELETED
ON dbo.FuelClaims (FuelClaimId, VehicleID, IsDeleted)
