How to automate a test in Oracle without using Java?

So I have a stored procedure that needs to be run to grab a daily sum of values (grouped by day) in the database. Is there an automated approach to do this comparison within Oracle? I was looking around online and couldn't find any examples...
The script I will run to execute the Stored Procedure is as follows:
execute sum_daily;
Some Sample Data is as follows:
Value_ID  Value_Tx  Value_Type
1         5         A
2         2         A
3         7         B
4         5         C
5         3         C
6         1         D
7         7         F
Expected Value (Inserted by the Stored Procedure):
Sum  Value_Type
14   A | B
8    C
1    D
7    F
Is there a way I can test, automatically via a script (PL/SQL?), whether or not the expected values match the results inserted by the stored procedure?
Thanks in advance!
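One simple approach is a self-checking anonymous PL/SQL block that runs the procedure and then diffs what it inserted against a table of expected rows. The sketch below rests on assumptions: the procedure is taken to insert into a table DAILY_SUMS(sum_value, value_type), and the expected rows are assumed to be preloaded into EXPECTED_SUMS with the same columns; both names are placeholders, since the question does not show the target table. SET SERVEROUTPUT ON is needed to see the output.
DECLARE
  v_diffs NUMBER;
BEGIN
  sum_daily;  -- run the procedure under test

  -- count rows that appear in one table but not the other (symmetric difference)
  SELECT (SELECT COUNT(*)
            FROM (SELECT sum_value, value_type FROM daily_sums
                  MINUS
                  SELECT sum_value, value_type FROM expected_sums))
       + (SELECT COUNT(*)
            FROM (SELECT sum_value, value_type FROM expected_sums
                  MINUS
                  SELECT sum_value, value_type FROM daily_sums))
    INTO v_diffs
    FROM dual;

  IF v_diffs = 0 THEN
    DBMS_OUTPUT.put_line('PASS: inserted sums match the expected values');
  ELSE
    DBMS_OUTPUT.put_line('FAIL: ' || v_diffs || ' mismatching row(s)');
  END IF;
END;
/
For more structured assertions of this kind, a unit-testing framework such as utPLSQL can also be used.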

Related

Run parallel jobs in oracle

I have a table jobs which is as follows:
JOB_ID STATUS
1 N
2 N
3 N
4 N
5 N
6 N
7 N
8 N
9 N
10 N
11 N
12 N
What I have to do is select 4 job IDs at a time, set their status to 'Y', and as soon as one job completes another should start running. At any instant there must be 4 jobs running.
I did research on how to achieve this, and most of the documents suggested using scheduler jobs. However, I could not figure out how to do it.
Here is a code sample of what I have done:
CREATE OR REPLACE PROCEDURE sp_processTask
(
  JobID NUMBER
)
AS
  vblSQL VARCHAR2(32767);
  vJobID NUMBER;
BEGIN
  vJobID := JobID;

  -- log the start of the job
  EXECUTE IMMEDIATE 'insert into job_logs values('''||vJobID||''',sysdate,sysdate)';

  -- mark the job as picked up
  vblSQL := 'UPDATE jobs
             SET status=''Y''
             WHERE job_ID='||vJobID;
  EXECUTE IMMEDIATE(vblSQL);
  Dbms_Output.put_line(vblSQL);
END;
/
And then I pass 4 different job IDs as follows:
BEGIN
  sp_processTask(1);
  sp_processTask(2);
  sp_processTask(3);
  sp_processTask(4);
END;
What should I do to start another job ID as soon as one job's flag is set to 'Y'? I am using Oracle.
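One way to get a fixed degree of concurrency without hand-managing individual scheduler jobs is DBMS_PARALLEL_EXECUTE (available from Oracle 11g Release 2), which chunks a driving table and runs the chunks through the scheduler at a given parallel level. The following is only a sketch under those assumptions, reusing the jobs table and sp_processTask from the question; the task name process_jobs is arbitrary, and the session needs the CREATE JOB privilege.
BEGIN
  -- create a task and split the jobs table into one chunk per JOB_ID
  DBMS_PARALLEL_EXECUTE.create_task(task_name => 'process_jobs');
  DBMS_PARALLEL_EXECUTE.create_chunks_by_number_col(
    task_name    => 'process_jobs',
    table_owner  => USER,
    table_name   => 'JOBS',
    table_column => 'JOB_ID',
    chunk_size   => 1);

  -- run at most 4 chunks at a time; as soon as one finishes, the next starts
  DBMS_PARALLEL_EXECUTE.run_task(
    task_name      => 'process_jobs',
    sql_stmt       => 'BEGIN
                         FOR j IN :start_id .. :end_id LOOP
                           sp_processTask(j);
                         END LOOP;
                       END;',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);

  DBMS_PARALLEL_EXECUTE.drop_task(task_name => 'process_jobs');
END;
/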

Multiple tablix for report

I have the following scenario I am trying to simplify using Visual Studio and the Microsoft BI stack (SSRS).
I have a Dataset as follows.
Year  Employee  Sales
1     A         5,000,000
2     A         7,500,000
3     A         6,500,000
1     B         3,500,000
2     B         5,000,000
3     B         8,000,000
1     C         5,750,000
2     C         7,500,000
3     C         6,500,000
1     D         4,500,000
2     D         5,500,000
3     D         6,100,000
I am trying to create a report in which a single report spans 3 tablixes inside the report body, displayed as follows when run.
Year 1
Year  Employee  Sales
1     A         5,000,000
1     B         3,500,000
1     C         5,750,000
1     D         4,500,000

Year 2
Year  Employee  Sales
2     A         7,500,000
2     B         5,000,000
2     C         7,500,000
2     D         5,500,000

Year 3
Year  Employee  Sales
3     A         6,500,000
3     B         8,000,000
3     C         6,500,000
3     D         6,100,000
Now I know I could replicate the tablix 3 times in the report body, but that would be a bad idea when it comes to maintaining this report.
In an effort to avoid repeating myself, is there a way to loop the tablix (N times) in SSRS, where the condition would be the Year column (values 1, 2, 3, ...)?
Here are the steps:
Insert a List and group it by Year.
Put your tablix inside the list.
This should do it.
What you want to do is to NEST two tables.
First, create your table that gives you all the results the way you want.
Then create a second table with one column and one row, and group it by your Year field.
Place your first table inside the second table's single cell so that it repeats for each year.

What if the value of order field is the same for all the records [duplicate]

This question already has answers here:
Why does Oracle return specific sequence if 'orderby' values are identical?
(4 answers)
Closed 7 years ago.
All, let's say the SQL looks like below.
Select a, b, c from table1 order by c
If all the rows in table1 have the same value in column c, I want to know whether the result comes back in the same order each time I execute the SQL.
Let's say data in the table1 looks like below.
a b c
-------------------------------------------
1 x1 2014-4-1
....
100 x100 2014-4-1
....
1000 x1000 2014-4-1
....
How does Oracle determine the row order when the ORDER BY values are the same?
Added
Will the sequence be random each time?
One simple answer is NO. There is no guarantee that ORDER BY on equal values will return the same sorted result every time. It might seem to you that it is always stable; however, there are many reasons why it could change.
For example, the sorting of equal values might differ after:
Gathering statistics
Adding an index on the column
For example,
Let's say I have a table t:
SQL> SELECT * FROM t ORDER BY b;
A B
---------- ----------
1 1
2 1
3 2
4 2
5 3
6 3
6 rows selected.
Now create a table t1 with the same rows but a different physical storage order:
SQL> CREATE TABLE t1 AS SELECT * FROM t ORDER BY b, DBMS_RANDOM.VALUE;
Table created.
SQL> SELECT * FROM t1 ORDER BY b;
A B
---------- ----------
1 1
2 1
4 2
3 2
5 3
6 3
6 rows selected.
So, both tables contain the same data; however, ORDER BY on a column with equal values does not guarantee the same ordering.
The order need not be random (changing on every execution), but it is not guaranteed either (it can change sometimes).
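If a repeatable order matters, the usual fix is to add a unique tie-breaker column to the ORDER BY. A small sketch, assuming column a is unique in table1:
SELECT a, b, c
FROM table1
ORDER BY c, a;  -- ties on c are broken by the unique column a, so the order is repeatable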

Transformation required to load data from multiple rows to one column

What transformations are required to load data from multiple rows of the source to one column of the target in the scenario below?
Source
Dept_No  Name
10       A
11       B
10       C
10       D
12       E
12       F
Target
Dept_No  Name
10       A, C, D
11       B
12       E, F
The solution is to use an Expression transformation and implement a kind of loop inside the transformation, comparing the current row's value with the previous row's values. Please check the detailed answer here: http://data-point.blogspot.co.il/2014/10/loading-data-from-multiple-rows-to-one_19.html
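If the source is already an Oracle table and the aggregation can be pushed into SQL instead of the mapping, the same result can be produced with LISTAGG (available from Oracle 11g Release 2). A sketch against a hypothetical SOURCE_TABLE with the columns shown above:
SELECT dept_no,
       LISTAGG(name, ', ') WITHIN GROUP (ORDER BY name) AS name
FROM source_table
GROUP BY dept_no;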

One Select, Two Resultsets: Returning Summary and Detail based on same query

I'm attempting to perform a large query where I want to:
Query the detail rows, then
Perform aggregations based on the results returned
Essentially, I want to perform my data-intensive query ONCE and derive both summary and detail values from that one query, as the query is pretty intensive. I'm SURE there is a better way to do this using the front-end application (e.g. detail rows in the SQL, aggregate in the front end?), but I want to know how to do it all in PL/SQL using essentially one select against the db, because for performance reasons I don't want to call essentially the same large SELECT twice (and at this point my reasons for wanting to do it in one query might be called stubborn; even if there's a better way, I'd like to know whether it can be done).
I know how to get the basic "detail-level" resultset. That query would return data such as:
UPC-Region-ProjectType-TotalAssignments-IncompleteAssignments
So say I have 11 records:
10-A-X-20-10
11-B-X-10-5
12-C-Y-30-15
13-C-Z-20-10
14-A-Y-10-5
15-B-X-30-15
16-C-Z-20-10
17-B-Y-10-5
18-C-Z-30-15
19-A-X-20-10
20-B-X-10-5
I want to be able to perform the query, then perform aggregations on that resultset, such as:
Region A Projects: 3
Region A Total Assign: 50
Region A Incompl Assign: 25
Region B...
Region C...
Project Type X Projects: 5
Project Type X Total Assign: 90
Project Type X Incompl Assign: 45
Project Type Y...
Project Type Z...
And then return both resultsets (Summary + Detail) to the calling application.
I guess the idea would be to run the details query into a temp table, then select/perform aggregation on it there to build the second "summary level" resultset, then pass the two resultsets back as two refcursors.
But I'm open to ideas...
My initial attempts have been:
type rec_projects is record
  (record matching my DetailsSQL)

/* record variable */
project_resultset rec_projects;

/* cursor variable */
OPEN cursorvar1 FOR
  select
    upc,
    region,
    project_type,
    tot_assigns,
    incompl_assigns
  ...
Then I:
loop
  fetch cursorvar1 into project_resultset;
  exit when cursorvar1%NOTFOUND;

  /* perform row-by-row aggregations into variables */
  if project_resultset.region = 'A' then
    numAProj          := numAProj + 1;
    numATotalAssign   := numATotalAssign + project_resultset.Totassigns;
    numAIncomplAssign := numAIncomplAssign + project_resultset.Incomplassigns;
  end if;
  /* ...and so on for the other regions and project types */
end loop;
Followed by opening another refcursor var - selecting the variables from DUAL:
open cursorvar2 for
select
numAProj, numATotalAssign, numAIncomplAssign, etc, etc from dual;
Lastly:
cur_out1 := cursorvar1;
cur_out2 := cursorvar2;
Not working... cursorvar1 seems to load fine, and I get into the loop, but I'm not ending up with anything in cursorvar2, and I just feel I'm probably totally on the wrong path here (i.e. that there is a better way to do it).
Thanks for your help.
I prefer doing all calculations on server side.
Both types of information (detail + master) can be fetched through a single cursor:
with
DET as (
-- your details subquery here
select
UPC,
Region,
Project_Type,
Total_Assignments,
Incomplete_Assignments
from ...
)
select
UPC,
Region,
Project_Type,
Total_Assignments,
Incomplete_Assignments,
null as projects_ctr
from DET
union all
select
null as UPC,
Region,
null as Project_Type,
sum(Total_Assignments) as Total_Assignments,
sum(Incomplete_Assignments) as Incomplete_Assignments,
count(0) as projects_ctr
from DET
group by Region
union all
select
null as UPC,
null as Region,
Project_Type,
sum(Total_Assignments) as Total_Assignments,
sum(Incomplete_Assignments) as Incomplete_Assignments,
count(0) as projects_ctr
from DET
group by Project_Type
order by UPC nulls first, Region, Project_Type
Result:
UPC Region Project_Type Total_Assignments Incomplete_Assignments Projects_Ctr
------ ------ ------------ ----------------- ---------------------- ------------
(null) A (null) 50 25 3
(null) B (null) 60 30 4
(null) C (null) 100 50 4
(null) (null) X 90 45 5
(null) (null) Y 50 25 3
(null) (null) Z 70 35 3
10 A X 20 10 (null)
11 B X 10 5 (null)
12 C Y 30 15 (null)
13 C Z 20 10 (null)
14 A Y 10 5 (null)
15 B X 30 15 (null)
16 C Z 20 10 (null)
17 B Y 10 5 (null)
18 C Z 30 15 (null)
19 A X 20 10 (null)
20 B X 10 5 (null)
If you are going to be creating these reports regularly, it might be better to create a global temporary table to store the results of your initial query:
CREATE GLOBAL TEMPORARY TABLE MY_TEMP_TABLE
ON COMMIT DELETE ROWS
AS
SELECT
UPC,
Region,
ProjectType,
TotalAssignments,
IncompleteAssignments
FROM WHEREVER
;
You can then run a series of follow-up queries to calculate the various statistics values for your report and output them in a format other than a large text table.
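For example, a per-region follow-up against that temporary table might look like the sketch below (column names taken from the question; the detail rows can be re-read from the same table within the same session):
SELECT Region,
       COUNT(*)                   AS Projects,
       SUM(TotalAssignments)      AS Total_Assign,
       SUM(IncompleteAssignments) AS Incompl_Assign
FROM MY_TEMP_TABLE
GROUP BY Region;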
