Oracle query performance issue on view

I am facing a performance issue with the Oracle query below:
select fms.*, v.ph_typ_strng_id, v.ph_typ
from fct_miscdata fms                 -- (10 columns)
cross join lkp_lang LL                -- (3 columns, 2 rows; we can't remove this cross join)
left outer join vw_dim_ph_typ v
       on v.LANG_COD = LL.LANG_COD    -- (PH_TYP_STRNG_ID, PH_TYP, LANG)
where v.lang_cod != '00_00'
  and fms.s_key = 0
  and fms.t_key = 0
  and v.ph_typ_strng_id =
      case
        when exists (select 'x'
                     from vw_dim_excp_typ_inter m  -- (misc_typ_skey, src, tnt, misc_typ_id) -- only 3 misc_typ_id values: 0, 98, 99
                     where fms.in_misc_typ_skey = m.misc_typ_skey
                       and m.s_key = fms.s_key
                       and m.t_key = fms.t_key)
        then case
               when not exists (select 'x'
                                from vw_dim_excp_typ_inter m
                                where fms.out_misc_typ_skey = m.misc_typ_skey
                                  and m.s_key = fms.s_key
                                  and m.t_key = fms.t_key)
               then 0006
             end
        else 0004
      end;
==> This feeds a materialized view in Oracle that is refreshed from a procedure. Whenever I run this procedure, it takes about an hour to complete. Indexes are already created on the tables.
--> The CASE expression at the end is what makes it so slow. If I comment that part out, it runs in a few minutes. The table has 90,772,580 records.
--> Is there an alternative way to write this query to improve performance?
Please suggest how I can optimize this query.
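One rewrite I am considering (untested; it assumes vw_dim_excp_typ_inter has at most one row per (misc_typ_skey, s_key, t_key) and that misc_typ_skey is never null there, otherwise the outer joins would multiply rows) replaces the two correlated EXISTS probes with outer joins, so the checks are resolved once in the join plan instead of once per row. The literals 6 and 4 correspond to 0006 and 0004 above:
select fms.*, v.ph_typ_strng_id, v.ph_typ
from fct_miscdata fms
cross join lkp_lang LL
join vw_dim_ph_typ v
  on v.lang_cod = LL.lang_cod
left join vw_dim_excp_typ_inter mi              -- matches the "in" existence test
  on mi.misc_typ_skey = fms.in_misc_typ_skey
 and mi.s_key = fms.s_key
 and mi.t_key = fms.t_key
left join vw_dim_excp_typ_inter mo              -- matches the "out" existence test
  on mo.misc_typ_skey = fms.out_misc_typ_skey
 and mo.s_key = fms.s_key
 and mo.t_key = fms.t_key
where v.lang_cod != '00_00'
  and fms.s_key = 0
  and fms.t_key = 0
  and v.ph_typ_strng_id =
      case
        when mi.misc_typ_skey is not null
         and mo.misc_typ_skey is null then 6    -- in-type is an exception, out-type is not
        when mi.misc_typ_skey is null  then 4   -- in-type is not an exception
      end;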
Thanks in advance

Related

Sequential vs parallel solution

I will try to present my problem as simply as possible.
Assume that we have 3 tables in Oracle 11g.
Persons (person_id, name, surname, status, etc )
Actions (action_id, person_id, action_value, action_date, calculated_flag)
Calculations (calculation_id, person_id,computed_value,computed_date)
What I want is: for each person that meets certain criteria (let's say status=3),
I should get the sum of action_values from the Actions table where calculated_flag=0 (something like select sum(action_value) from Actions where calculated_flag=0 and person_id=current_id).
Then I shall use that sum in some kind of formula and update the Calculations table for that specific person_id.
update Calculations set computed_value=newvalue, computed_date=sysdate
where person_id=current_id
After that, the calculated_flag for the participating rows will be set to 1.
update Actions set calculated_flag=1
where calculated_flag=0 and person_id=current_id
Now this can easily be done sequentially, by creating a cursor that runs through the Persons table and then executes each action needed for the specific person.
(I don't provide the code for the sequential solution as the above is just an example that resembles my real-world setup.)
The problem is that we are talking about a quite big amount of data, and the sequential approach seems like a waste of computational time.
It seems to me that this task could be performed in parallel for a number of person_ids.
So the question is:
Can this kind of task be performed using parallelization in PL/SQL?
What would the solution look like? That is, what special packages (e.g. DBMS_PARALLEL_EXECUTE), keywords (e.g. bulk collect), methods should be used and in what manner?
Also, should I have any concerns about partial failure of parallel updates?
Note that I am not quite familiar with parallel programming with PL/SQL.
Thanks.
Edit 1.
Here is my pseudo code for the sequential solution:
procedure sequential_solution is
  cursor persons_of_interest is
    select person_id
    from persons
    where status = 3;

  tempvalue number;
  newvalue  number;
begin
  for person in persons_of_interest
  loop
    begin
      savepoint personsp;
      -- step 1
      select sum(action_value)
      into tempvalue
      from actions
      where calculated_flag = 0
        and person_id = person.person_id;

      newvalue := dosomemorecalculations(tempvalue);

      -- step 2
      update calculations
      set computed_value = newvalue,
          computed_date  = sysdate
      where person_id = person.person_id;

      -- step 3
      update actions
      set calculated_flag = 1
      where calculated_flag = 0
        and person_id = person.person_id;

      -- step 4 (didn't mention this step before - sorry)
      insert into actions
        ( person_id, action_value, action_date, calculated_flag )
      values
        ( person.person_id, 100, sysdate, 0 );
    exception
      when others then
        rollback to personsp;
        -- this call is defined with pragma AUTONOMOUS_TRANSACTION:
        log_failure(person.person_id);
    end;
  end loop;
end;
Now, how would I speed up the above, either with FORALL and BULK COLLECT or with parallel programming, under the following constraints:
proper memory management (taking into consideration the large amount of data)
For a single person, if one part of the step sequence fails, all steps should be rolled back and the failure logged.
I can propose the following. Let's say you have 1,000,000 rows in the persons table, and you want to process 10,000 persons per iteration. You can do it this way:
declare
id_from persons.person_id%type;
id_to persons.person_id%type;
calc_date date := sysdate;
begin
for i in 1 .. 100 loop
id_from := (i - 1) * 10000;
id_to := i * 10000;
-- Updating Calculations table, errors are logged into err$_calculations table
merge into Calculations c
using (select p.person_id, sum(action_value) newvalue
from Actions a join persons p on p.person_id = a.person_id
where a.calculated_flag = 0
and p.status = 3
and p.person_id between id_from and id_to
group by p.person_id) s
on (s.person_id = c.person_id)
when matched then update
set c.computed_value = s.newvalue,
c.computed_date = calc_date
log errors into err$_calculations reject limit unlimited;
-- updating actions table only for those person_id which had no errors:
merge into actions a
using (select distinct p.person_id
from persons p join Calculations c on p.person_id = c.person_id
where c.computed_date = calc_date
and p.person_id between id_from and id_to) s
on (s.person_id = a.person_id)
when matched then update
set a.calculated_flag = 1;
-- inserting a new action row for each person for whom calculations were successful
insert into actions (person_id, action_value, action_date, calculated_flag)
select distinct p.person_id, 100, calc_date, 0
from persons p join Calculations c on p.person_id = c.person_id
where c.computed_date = calc_date
and p.person_id between id_from and id_to;
commit;
end loop;
end;
How it works:
You split the data in the persons table into chunks of about 10,000 rows (this depends on gaps in the IDs; the maximum value of i * 10000 must be known to be larger than the maximum person_id).
You do the calculation in the MERGE statement and update the Calculations table.
The LOG ERRORS clause prevents the statement from failing on exceptions. If an error occurs, the offending row is not updated but is written to an error-logging table, and execution is not interrupted. To create this table, execute:
begin
DBMS_ERRLOG.CREATE_ERROR_LOG('CALCULATIONS');
end;
The table err$_calculations will be created. See the documentation for more information about the DBMS_ERRLOG package.
The second MERGE statement sets calculated_flag = 1 only for rows where no errors occurred, and the INSERT statement then adds a new action row for each of them. These rows can be found simply by selecting from the Calculations table.
Also, I added the variables id_from and id_to to compute the ID range to update, and the variable calc_date to make sure that all rows updated in the first MERGE statement can be found later by date.
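If true parallel execution is wanted rather than a serial loop over chunks, the same chunking idea can be handed to DBMS_PARALLEL_EXECUTE (available from 11.2). A sketch only: process_person_range is a hypothetical procedure that wraps the two MERGE statements and the INSERT above for one id range.
begin
  dbms_parallel_execute.create_task('persons_calc');
  -- split PERSONS into chunks of roughly 10000 ids, same idea as id_from/id_to above
  dbms_parallel_execute.create_chunks_by_number_col(
    task_name    => 'persons_calc',
    table_owner  => user,
    table_name   => 'PERSONS',
    table_column => 'PERSON_ID',
    chunk_size   => 10000);
  -- run 4 jobs; each job receives one chunk's boundaries as :start_id / :end_id
  dbms_parallel_execute.run_task(
    task_name      => 'persons_calc',
    sql_stmt       => 'begin process_person_range(:start_id, :end_id); end;',
    language_flag  => dbms_sql.native,
    parallel_level => 4);
  dbms_parallel_execute.drop_task('persons_calc');
end;
Each chunk is committed independently by its job, so a failed chunk can be retried without touching the others.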

PL/SQL query taking too long to execute

The Person table has 2 million rows in the Finance schema and 3 million rows in the HR schema. I am updating the person status in the Finance schema with the person_status from the HR schema. The query runs on an M3000 server with 32 GB RAM, Solaris 10, and Oracle 11.2.2. It has been running for 117 hours and is still running. How can I optimize this query? I have created an index on person_no.
DECLARE
CURSOR c IS
SELECT p1.person_no AS "PersonNo1"
, p2.person_no AS "PersonNo2"
, p2.person_status AS "P2_PERSON_STATUS"
FROM person p1
, hr.person p2
WHERE (lower(p1.person_no) = lower(p2.person_no)
OR substr(p1.person_no, instr(p1.person_no, '-') + 1, length(p1.person_no))
= substr(p2.person_no, instr(p2.person_no, '-') + 1, length(p2.person_no)));
BEGIN
FOR i IN c LOOP
UPDATE person
SET person_status_id = decode(lower(i.P2_PERSON_STATUS), 'y', 1, 'n', 2)
WHERE (lower(person_no) = lower(i.PersonNo2)
OR substr(person_no, instr(person_no, '-') + 1, length(person_no))
= substr(i.PersonNo2, instr(i.PersonNo2, '-') + 1, length(i.PersonNo2)));
COMMIT;
END LOOP;
END;
/
There are several things you can do to optimize this. It would be helpful to see a query plan.
You can use a MERGE statement. Then you have only one statement to optimize. Something like this:
merge into person
using hr.person person_hr
on (person.person_no = person_hr.person_no)
when matched then
  update set person_status_id = decode(lower(person_hr.person_status), 'y', 1, 'n', 2);
You have to adjust the ON part to match your WHERE clause.
That is then the place where optimization will pay off most. You may have to create a function-based index for lower and also for substr.
Something like
CREATE INDEX person_idx
ON person (lower(person_no))
And of course the same for the substring expressions. Hope this helps.
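For the substr branch of the predicate, such a function-based index might look like this (a sketch; person_no_suffix_idx is just a name I picked, and the indexed expression must match the query's expression exactly for the optimizer to use it):
CREATE INDEX person_no_suffix_idx
ON person (substr(person_no, instr(person_no, '-') + 1, length(person_no)))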
In the code provided in the question I can see two aspects:
1) Query optimization
2) Row-by-row updates --> never recommended for such a huge record count.
Approach to optimization:
1) Use a MERGE statement, as it bundles the whole block into pure SQL and thus speeds up the query.
2) Use function-based indexes, since functions like LOWER are used in the query's WHERE conditions.
NOTE: over-indexing may also degrade performance.
3) Last but not least, if you have to use an anonymous block, avoid row-by-row updates. Try the BULK COLLECT options illustrated below.
Since I do not have a workstation handy, please bear with any syntax errors.
Let me know if this helps.
DECLARE
  TYPE person_no_tab IS TABLE OF person.person_no%TYPE;
  PersonNo1 person_no_tab;
  TYPE person_no2_tab IS TABLE OF hr.person.person_no%TYPE;
  PersonNo2 person_no2_tab;
  TYPE status_tab IS TABLE OF hr.person.person_status%TYPE;
  P2_PERSON_STATUS status_tab;
BEGIN
  SELECT p1.person_no,
         p2.person_no,
         p2.person_status
  BULK COLLECT INTO PersonNo1, PersonNo2, P2_PERSON_STATUS
  FROM person p1,
       hr.person p2
  WHERE (lower(p1.person_no) = lower(p2.person_no)
     OR SUBSTR(p1.person_no, instr(p1.person_no, '-') + 1, LENGTH(p1.person_no))
      = SUBSTR(p2.person_no, instr(p2.person_no, '-') + 1, LENGTH(p2.person_no)));

  FORALL i IN 1 .. PersonNo1.COUNT
    UPDATE person
    SET person_status_id = DECODE(lower(P2_PERSON_STATUS(i)), 'y', 1, 'n', 2)
    WHERE (lower(person_no) = lower(PersonNo2(i))
       OR SUBSTR(person_no, instr(person_no, '-') + 1, LENGTH(person_no))
        = SUBSTR(PersonNo2(i), instr(PersonNo2(i), '-') + 1, LENGTH(PersonNo2(i))));
  COMMIT;
END;
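One caveat with the block above: a single BULK COLLECT over a multi-million-row join holds everything in PGA memory at once. A memory-bounded variant (a sketch; the 10,000-row LIMIT is an arbitrary batch size, and the join itself still has to be evaluated) fetches and updates in batches:
DECLARE
  CURSOR c IS
    SELECT p2.person_no, p2.person_status
    FROM person p1, hr.person p2
    WHERE (lower(p1.person_no) = lower(p2.person_no)
       OR SUBSTR(p1.person_no, instr(p1.person_no, '-') + 1, LENGTH(p1.person_no))
        = SUBSTR(p2.person_no, instr(p2.person_no, '-') + 1, LENGTH(p2.person_no)));
  TYPE person_no_tab IS TABLE OF hr.person.person_no%TYPE;
  TYPE status_tab    IS TABLE OF hr.person.person_status%TYPE;
  l_person_no person_no_tab;
  l_status    status_tab;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_person_no, l_status LIMIT 10000; -- bound memory per batch
    EXIT WHEN l_person_no.COUNT = 0;
    FORALL i IN 1 .. l_person_no.COUNT
      UPDATE person
      SET person_status_id = DECODE(lower(l_status(i)), 'y', 1, 'n', 2)
      WHERE (lower(person_no) = lower(l_person_no(i))
         OR SUBSTR(person_no, instr(person_no, '-') + 1, LENGTH(person_no))
          = SUBSTR(l_person_no(i), instr(l_person_no(i), '-') + 1, LENGTH(l_person_no(i))));
    COMMIT; -- one commit per batch, as in the block above
  END LOOP;
  CLOSE c;
END;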

Building a PL/SQL coverage report with DBMS Profiler

I am using DBMS_PROFILER for basic profiling of my PL/SQL packages. I am also using it to get code coverage statistics using the following query:
SELECT EXEC.unit_name unitname, ROUND(EXEC.cnt / total.cnt * 100, 1) Code_coverage
FROM (SELECT u.unit_name, COUNT(1) cnt
      FROM plsql_profiler_data d, plsql_profiler_units u
      WHERE u.unit_number = d.unit_number
      GROUP BY u.unit_name) total,
     (SELECT u.unit_name, COUNT(1) cnt
      FROM plsql_profiler_data d, plsql_profiler_units u
      WHERE u.unit_number = d.unit_number
        AND d.total_occur > 0
      GROUP BY u.unit_name) EXEC
WHERE EXEC.unit_name = total.unit_name
I clear the plsql_profiler_data, plsql_profiler_units, and plsql_profiler_runs tables before each profiler run so that I need not know the run id each time.
This gives me package-wise information on the percentage of code that was covered during profiling. Now I am trying to see if this can be built into a normal coverage report where I can tell which line of code was covered and which one wasn't (say, select lineOfCode, iscovered from ...), so that I can build a report with HTML formatting to indicate whether a line was covered.
I am not too familiar with the Oracle data dictionary views where functions and procedures are stored. (I got the above query from a blog and modified it slightly to remove run ids.)
Is this possible?
If so, how can I achieve this?
I think this approaches what you're after:
-- View lines of code profiled, along with run times, next to the complete, ordered source..
-- Provides an annotated view of profiled packages, procs, etc.
-- Only the first line of a multiline SQL statement will register with timings.
SELECT u.UNIT_OWNER || '.' || u.UNIT_NAME AS "Unit"
, s.line
, CASE WHEN d.TOTAL_OCCUR >= 0 THEN 'C'
ELSE ' ' END AS Covered
, s.TEXT
, TO_CHAR(d.TOTAL_TIME / (1000*1000*1000), 'fm990.000009') AS "Total Time (sec)"
, CASE WHEN NVL(d.TOTAL_OCCUR, 1) > 0 THEN d.TOTAL_OCCUR ELSE 1 END AS "# Iterations"
, TO_CHAR(CASE WHEN d.TOTAL_OCCUR > 0 THEN d.TOTAL_TIME / (d.TOTAL_OCCUR * (1000*1000*1000))
ELSE NULL END, 'fm990.000009') AS "Avg Time (sec)"
FROM all_source s
LEFT JOIN plsql_profiler_units u ON s.OWNER = u.UNIT_OWNER
AND s.NAME = u.UNIT_NAME
AND s.TYPE = u.UNIT_TYPE
LEFT JOIN plsql_profiler_data d ON u.UNIT_NUMBER = d.UNIT_NUMBER
AND s.LINE = d.LINE#
AND d.RUNID = u.RUNID
WHERE u.RUNID = ? -- Add RUNID of profiler run to investigate here
ORDER BY u.UNIT_NAME
, s.LINE
There are a few issues to keep in mind.
1) Many rows in the plsql_profiler_data table will NOT have accurate values in their TOTAL_TIME column because they executed faster than the resolution of the timer.
Ask Tom re: timings:
The timings are collected using some unit of time, typically only
granular to the HSECS.
That means that many discrete events that take less than 1/100th of a
second appear to take ZERO seconds.
Many discrete events that take less than 1/100th of a second may
appear to take 1/100th of a second.
2) Only the FIRST line in a multiline statement will show as covered. So if you split an INSERT or whatever across multiple lines, I don't know of any easy way to have every line of that statement show as profiled in an annotated-source style of report.
Also, check out Oracle's dbms_profiler documentation and this useful package reference for help crafting queries against the collected profiler data.
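For reference, a profiling run that produces the RUNID used in the query above can be started and stopped roughly like this (a sketch; my_package.my_procedure stands in for whatever unit you want coverage for):
declare
  l_run_number binary_integer;
begin
  dbms_profiler.start_profiler(run_comment => 'coverage run',
                               run_number  => l_run_number);
  my_package.my_procedure;      -- hypothetical code under test
  dbms_profiler.stop_profiler;  -- stops the run and flushes the collected data
  dbms_output.put_line('profiler run id: ' || l_run_number);
end;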
Actually there are some tools for PL/SQL that do code coverage. See the answers to this question for more information.
That said, you can find information on user-created data structures and code in the following dictionary views:
user_source: here you can find the source in the TEXT field, typified by function, procedure, package, etc.
user_tables
user_indexes
user_types: if you use some kind of OO code.
Other views beginning with user_ that you may need.
Basically you would need to check the result of your query against user_source and get extra information from the other views.

Optimizing a duplicate delete statement in Oracle

I have 2 delete statements that are taking a long time to complete. There are several indexes on the columns in the WHERE clause.
What is a duplicate?
If 2 or more records have the same values in the columns id, cid, type, trefid, ordrefid, amount, and paydt, then they are duplicates.
The DELETEs remove about 1 million records.
Can they be rewritten in any way to make them quicker?
DELETE FROM TABLE1 A WHERE loaddt < (
SELECT max(loaddt) FROM TABLE1 B
WHERE
a.id=b.id and
a.cid=b.cid and
NVL(a.type,'-99999') = NVL(b.type,'-99999') and
NVL(a.trefid,'-99999')=NVL(b.trefid,'-99999') and
NVL(a.ordrefid,'-99999')= NVL(b.ordrefid,'-99999') and
NVL(a.amount,'-99999')=NVL(b.amount,'-99999') and
NVL(a.paydt,TO_DATE('9999-12-31','YYYY-MM-DD'))=NVL(b.paydt,TO_DATE('9999-12-31','YYYY-MM-DD'))
);
COMMIT;
DELETE FROM TABLE1 a where rowid > (
Select min(rowid) from TABLE1 b
WHERE
a.id=b.id and
a.cid=b.cid and
NVL(a.type,'-99999') = NVL(b.type,'-99999') and
NVL(a.trefid,'-99999')=NVL(b.trefid,'-99999') and
NVL(a.ordrefid,'-99999')= NVL(b.ordrefid,'-99999') and
NVL(a.amount,'-99999')=NVL(b.amount,'-99999') and
NVL(a.paydt,TO_DATE('9999-12-31','YYYY-MM-DD'))=NVL(b.paydt,TO_DATE('9999-12-31','YYYY-MM-DD'))
);
commit;
Explain Plan:
DELETE TABLE1
HASH JOIN 1296491
Access Predicates
AND
A.ID=ITEM_1
A.CID=ITEM_2
ITEM_3=NVL(TYPE,'-99999')
ITEM_4=NVL(TREFID,'-99999')
ITEM_5=NVL(ORDREFID,'-99999')
ITEM_6=NVL(AMOUNT,(-99999))
ITEM_7=NVL(PAYDT,TO_DATE(' 9999-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
Filter Predicates
LOADDT<MAX(LOADDT)
TABLE ACCESS TABLE1 FULL 267904
VIEW VW_SQ_1 690385
SORT GROUP BY 690385
TABLE ACCESS TABLE1 FULL 267904
How large is the table? If the count of deleted rows is up to 12%, then you may think about an index.
Could you somehow partition your table, say week by week, and then scan only the current week?
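For the partition idea, an interval-partitioned copy of the table could be built like this (a sketch; it requires the Partitioning option, assumes loaddt is a DATE, and the 7-day interval and the seed boundary date are made-up values):
-- weekly interval partitions; the boundary date only seeds the first partition
create table table1_part
partition by range (loaddt)
interval (numtodsinterval(7, 'DAY'))
(partition p_old values less than (date '2014-01-01'))
as select * from table1;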
Maybe this could be more efficient. When you use an aggregate function, Oracle must walk through all relevant rows (in your case a full scan), but when you use EXISTS it stops as soon as the first occurrence is found. (And of course the query would be much faster if there were one function-based index, because of the NVL, on all columns in the WHERE clause.)
DELETE FROM TABLE1 A
WHERE exists (
SELECT 1
FROM TABLE1 B
WHERE
A.loaddt < b.loaddt and
a.id=b.id and
a.cid=b.cid and
NVL(a.type,'-99999') = NVL(b.type,'-99999') and
NVL(a.trefid,'-99999')=NVL(b.trefid,'-99999') and
NVL(a.ordrefid,'-99999')= NVL(b.ordrefid,'-99999') and
NVL(a.amount,'-99999')=NVL(b.amount,'-99999') and
NVL(a.paydt,TO_DATE('9999-12-31','YYYY-MM-DD'))=NVL(b.paydt,TO_DATE('9999-12-31','YYYY-MM-DD'))
);
Although some may disagree, I am a proponent of running large, long-running deletes procedurally. In my view it is much easier to control and track progress (and your DBA will like you better ;-). Also, I am not sure why you need to join table1 to itself to identify duplicates (and I'd be curious whether you ever run into snapshot-too-old issues with your current approach). You also shouldn't need multiple delete statements; all duplicates should be handled in one process. Finally, you should check WHY you keep re-introducing duplicates each week, and perhaps change the load process (maybe doing a merge/upsert rather than all inserts).
That said, you might try something like:
-- first create mat view to find all duplicates
create materialized view my_dups_mv
tablespace my_tablespace
build immediate
refresh complete on demand
as
select id,cid,type,trefid,ordrefid,amount,paydt, count(1) as cnt
from table1
group by id,cid,type,trefid,ordrefid,amount,paydt
having count(1) > 1;
-- dedup data (or put into procedure and schedule along with mat view refresh above)
declare
-- make sure my_dups_mv is refreshed first
cursor dup_cur is
select * from my_dups_mv;
type duprec_t is record(row_id rowid);
duprec duprec_t;
type duptab_t is table of duprec_t index by pls_integer;
duptab duptab_t;
l_ctr pls_integer := 0;
l_dupcnt pls_integer := 0;
begin
for rec in dup_cur
loop
l_ctr := l_ctr + 1;
-- assuming needed indexes exist
select rowid
bulk collect into duptab
from table1
where id = rec.id
and cid = rec.cid
and type = rec.type
and trefid = rec.trefid
and ordrefid = rec.ordrefid
and amount = rec.amount
and paydt = rec.paydt
-- order by whatever makes sense to make the "keeper" float to top
order by loaddt desc
;
for i in 2 .. duptab.count
loop
l_dupcnt := l_dupcnt + 1;
delete from table1 where rowid = duptab(i).row_id;
end loop;
if (mod(l_ctr, 10000) = 0) then
-- log to log table here (calling autonomous procedure you'll need to implement)
insert_logtable('Table1 deletes', 'Commit reached, deleted ' || l_dupcnt || ' rows');
commit;
end if;
end loop;
commit;
end;
Check your log table for progress status.
1. Parallel
alter session enable parallel dml;
DELETE /*+ PARALLEL */ FROM TABLE1 A WHERE loaddt < (
...
This assumes you have Enterprise Edition, a sane server configuration, and that you are on 11g. If you're not on 11g, the parallel syntax is slightly different.
2. Reduce memory requirements
The plan shows a hash join, which is probably a good thing. But without any useful filters, Oracle has to hash the entire table. (Tbone's query, which only uses a GROUP BY, looks nicer and may run faster. But it will also probably run into the same problem trying to sort or hash the entire table.)
If the hash can't fit in memory it must be written to disk, which can be very slow. Since you run this query every week, only one of the tables needs to look at all the rows. Depending on exactly when it runs, you can add something like this to the end of the query: ) where b.loaddt >= sysdate - 14. This may significantly reduce the amount of writing to temporary tablespace. And it may also reduce read IO if you use some partitioning strategy like jakub.petr suggested.
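Folded into the first DELETE, that filter would sit inside the correlated subquery (a sketch; the 14-day window is an assumption about how far back new duplicates can appear, and rows whose duplicates are all older than the window will no longer be deleted):
DELETE /*+ PARALLEL */ FROM TABLE1 A WHERE loaddt < (
    SELECT max(loaddt) FROM TABLE1 B
    WHERE a.id = b.id and
          a.cid = b.cid and
          NVL(a.type,'-99999') = NVL(b.type,'-99999') and
          NVL(a.trefid,'-99999') = NVL(b.trefid,'-99999') and
          NVL(a.ordrefid,'-99999') = NVL(b.ordrefid,'-99999') and
          NVL(a.amount,'-99999') = NVL(b.amount,'-99999') and
          NVL(a.paydt,TO_DATE('9999-12-31','YYYY-MM-DD')) = NVL(b.paydt,TO_DATE('9999-12-31','YYYY-MM-DD')) and
          b.loaddt >= sysdate - 14   -- only recent rows feed the MAX, so only they get hashed
);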
3. Active Report
If you want to know exactly what your query is doing, run the Active Report:
select dbms_sqltune.report_sql_monitor(sql_id => 'YOUR_SQL_ID_HERE', type => 'active')
from dual;
(Save the output to an .html file and open it with a browser.)

SQL stored procedure takes more time to execute as the number of records increases; is there any way to optimize it?

I have 600,000 records and I want to fetch 10 of them at a time, as I only display 10 records in the grid. My stored procedure works fine when I fetch records in the range 1-10,000 (e.g. 500-510); after that, the execution time increases as the row number increases, e.g. fetching records between 100,000 and 100,010 takes much longer.
Can anyone please help me? I have used ROW_NUMBER() to get the row number and BETWEEN to retrieve the data.
Please suggest an optimized way to get the records.
The stored procedure creates a SQL query as given below:
SELECT FuelClaimId FROM
( SELECT fc.FuelClaimId, ROW_NUMBER() OVER (ORDER BY fc.FuelClaimId) AS RowNum
  FROM FuelClaims fc
  INNER JOIN Vehicles v ON fc.VehicleId = v.VehicleId
  INNER JOIN Drivers d ON d.DriverId = v.OfficialID
  INNER JOIN Departments de ON de.DepartmentId = d.DepartmentId
  INNER JOIN Provinces p ON de.ProvinceId = p.ProvinceId
  INNER JOIN FuelRates f ON f.FuelRateId = fc.FuelRateId
  INNER JOIN FuelClaimStatuses fs ON fs.FuelClaimStatusId = fc.statusid
  INNER JOIN LogsheetMonths l ON l.LogsheetMonthId = f.LogsheetMonthId
  WHERE fc.IsDeleted = 0
) AS MyDerivedTable
WHERE MyDerivedTable.RowNum BETWEEN 600000 AND 600010
Try this instead:
SELECT TOP 10 fc.FuelClaimId
FROM FuelClaims fc
INNER JOIN Vehicles v ON fc.VehicleId = v.VehicleId
INNER JOIN Drivers d ON d.DriverId = v.OfficialID
INNER JOIN Departments de ON de.DepartmentId = d.DepartmentId
INNER JOIN Provinces p ON de.ProvinceId = p.ProvinceId
INNER JOIN FuelRates f ON f.FuelRateId = fc.FuelRateId
INNER JOIN FuelClaimStatuses fs ON fs.FuelClaimStatusId = fc.statusid
INNER JOIN LogsheetMonths l ON l.LogsheetMonthId = f.LogsheetMonthId
WHERE fc.IsDeleted = 0 AND fc.FuelClaimId BETWEEN 600001 AND 600010
ORDER BY fc.FuelClaimId
Also, BETWEEN is inclusive, so BETWEEN 10 AND 20 actually returns 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 and 20: that is 11 rows, not 10. As identity values usually start at 1, you really want BETWEEN 11 AND 20 (hence 600001 in the above).
The above query should fix the issue where performance degrades as you query larger ranges of items.
While it won't always return exactly 10 records, the fix for that is:
WHERE fc.IsDeleted = 0 AND fc.FuelClaimId > @LastMaxFuelClaimId
where @LastMaxFuelClaimId is the maximum FuelClaimId returned by the previous query execution.
Edit: The reason it keeps getting slower is that it has to read more and more of the table to reach the next chunk. It doesn't skip the first 600,000 records; it reads them all and then returns only the next 10, so each time you query it reads all the previous records over again. The approach above does not suffer from that problem.
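Putting that together with the query above, each page request would pass in the last id it saw (a sketch; @LastMaxFuelClaimId is assumed to be a parameter of the stored procedure, 0 for the first page):
SELECT TOP 10 fc.FuelClaimId
FROM FuelClaims fc
INNER JOIN Vehicles v ON fc.VehicleId = v.VehicleId
INNER JOIN Drivers d ON d.DriverId = v.OfficialID
INNER JOIN Departments de ON de.DepartmentId = d.DepartmentId
INNER JOIN Provinces p ON de.ProvinceId = p.ProvinceId
INNER JOIN FuelRates f ON f.FuelRateId = fc.FuelRateId
INNER JOIN FuelClaimStatuses fs ON fs.FuelClaimStatusId = fc.statusid
INNER JOIN LogsheetMonths l ON l.LogsheetMonthId = f.LogsheetMonthId
WHERE fc.IsDeleted = 0 AND fc.FuelClaimId > @LastMaxFuelClaimId
ORDER BY fc.FuelClaimId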
You should post an execution plan, but a probable cause of performance problems is inadequate or missing indexing.
Make sure you have
an index on all your foreign key relations
a covering index on the fields you retrieve and select from
Covering Index
CREATE INDEX IX_FUELCLAIMS_FUELCLAIMID_ISDELETED
ON dbo.FuelClaims (FuelClaimId, VehicleID, IsDeleted)
