Environment:
Oracle 12.2 64-bit under Linux.
job_queue_processes = 4000
aq_tm_processes = 1
I wrote a package, say MYPKG, that has three procedures inside.
The first procedure serves client requests from a web application, say ProcWeb.
This procedure creates two jobs and waits in a loop for them to finish.
Both jobs are disposable and are stopped and dropped from MYPKG.ProcWeb after
use. The procedures these jobs run are also inside the package: ProcTop and ProcBottom.
This is how the jobs are created inside MYPKG.ProcWeb:
l_jobtop := dbms_scheduler.generate_job_name('TOP_JOB_');
l_jobbottom := dbms_scheduler.generate_job_name('BOTTOM_JOB_');
dbms_scheduler.create_job(job_name => l_jobtop,
                          job_type => 'STORED_PROCEDURE',
                          job_action => 'MYPKG.PROCTOP');
dbms_scheduler.create_job(job_name => l_jobbottom,
                          job_type => 'STORED_PROCEDURE',
                          job_action => 'MYPKG.PROCBOTTOM');
…
dbms_scheduler.run_job(l_jobtop, use_current_session=>false);
dbms_scheduler.run_job(l_jobbottom, use_current_session=>false);
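The wait loop itself is not shown here; it is essentially a poll of the scheduler views. A minimal sketch, assuming an extra PLS_INTEGER variable l_done and EXECUTE privilege on DBMS_LOCK (run_job with use_current_session => false returns immediately):
-- wait until a run-details row exists for each generated job name
loop
  select count(distinct job_name)
    into l_done
    from user_scheduler_job_run_details
   where job_name in (l_jobtop, l_jobbottom);
  exit when l_done = 2;
  dbms_lock.sleep(1);
end loop;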
For the first ten days after the package was installed on the database, everything was fine.
Then the weird things began: one job would start, but the other never started, or only after a huge delay.
So I wrote standalone procedures ProcTop and ProcBottom and changed the job creation to:
dbms_scheduler.create_job(job_name => l_jobtop,
                          job_type => 'STORED_PROCEDURE',
                          job_action => 'PROCTOP');
dbms_scheduler.create_job(job_name => l_jobbottom,
                          job_type => 'STORED_PROCEDURE',
                          job_action => 'PROCBOTTOM');
It’s hard to explain, but observation shows that calling standalone procedures instead of packaged procedures is much more stable: both jobs start with no problem.
What is the hidden problem with having the job’s executable block inside the same package that creates the job?
I have a problem with the Laravel queue system. For performance purposes we use the Laravel queue system with Amazon SQS for heavier calculations. This works fine, at least for most of our jobs. Some of them, where the raw calculation time is about 25 seconds, keep blocking the queue in the "Processing" state for 6 minutes.
We logged the complete handle function of the job and the output was correct at every point. In fact, the last log statement (at the end of the function) was printed 20 seconds after entering the function. The data was calculated as expected and the database was up to date, but the job was still "Processing".
After we intentionally crashed the job at the end of the handle function, the calculated data was stored perfectly, but obviously the queue crashed as well. So I guess it has to be something happening after the handle function. Maybe something with allocated memory?
The config of the queue is the default sqs driver configuration:
'sqs' => array(
'driver' => 'sqs',
'key' => env('AWS_KEY', 'secret'),
'secret' => env('AWS_SECRET', 'secret'),
'queue' => env('AWS_SQS_QUEUE', 'secret'),
'region' => env('AWS_SQS_REGION', 'secret'),
),
Edit:
I found out it is not only the queue: when I execute the job as a command, the same behavior appears.
I print "Done." as the last statement in the command, and after it is printed the console halts for a few seconds before returning to the console input.
When I comment out the part with the most queries, the issue is gone; the more queries I use, the longer I have to wait for the console.
I hope one of you knows what causes this behavior and how we can fix it.
Thanks in advance.
OK, I found the issue.
The problem was that Telescope was enabled, so after the code was executed Telescope was busy logging all the requests and cache hits.
After disabling Telescope there was no delay any more.
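For reference, Telescope can usually be switched off per environment through its published config key; a minimal sketch (the env variable name is the one shipped with the default config/telescope.php):
// config/telescope.php
'enabled' => env('TELESCOPE_ENABLED', true),

# .env on the machine running the queue workers
TELESCOPE_ENABLED=false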
I have a performance issue with one private API of the Process interface of Work In Process, wip_movProc_priv.processIntf. It takes around 2.5 minutes for all the transactions,
but when I run this API on an R12.1.3 instance it does not take this much time.
wip_movProc_priv.processIntf (p_group_id => p_group_id,
p_proc_phase => WIP_CONSTANTS.MOVE_VAL,
p_time_out => 0,
p_move_mode => 3, --WIP_CONSTANTS.ONLINE,
p_bf_mode => WIP_CONSTANTS.ONLINE,
p_mtl_mode => WIP_CONSTANTS.ONLINE,
p_endDebug => 'T',
p_initMsgList => 'T',
p_insertAssy => 'T',
p_do_backflush => 'F',
x_returnStatus => x_returnstatus);
Please help me.
Thanks,
Yasin Musani
This question actually has far too little detail to be answered.
Typically, the majority of the time spent in Oracle EBS code execution is due to a few badly performing SQLs.
You can identify the offending SQLs by looking at the AWR or SGA, e.g. using the Blitz Report DBA AWR SQL Performance Summary or DBA SGA SQL Performance Summary, and would then need to analyze further why these are not executing properly.
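As a starting point, a hedged example of pulling the top SQLs by elapsed time straight from the SGA (v$sql is a standard dynamic performance view; the Blitz Reports mentioned above present similar data):
select sql_id,
       executions,
       round(elapsed_time / 1e6, 1) as elapsed_seconds,
       substr(sql_text, 1, 80)      as sql_text
  from v$sql
 order by elapsed_time desc
 fetch first 20 rows only;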
I have many tests in a Laravel app.
They make POST/GET requests and check the responses.
Every test uses the DatabaseMigrations trait.
On my laptop it takes about 20 seconds for each test to finish.
I do not want to write different repositories for different types of queries just so I can mock them later (extra work).
Maybe there is a better solution?
You should use in-memory testing with SQLite:
'testing' => [
'driver' => 'sqlite',
'database' => ':memory:',
'prefix' => '',
],
In this case, migrations and seeders will create the data-filled tables really quickly.
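To make the test suite actually use that connection, point PHPUnit at it in phpunit.xml (assuming the connection is registered as 'testing' in config/database.php as shown above):
<php>
    <env name="DB_CONNECTION" value="testing"/>
</php>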
I have some Delphi 2007 code which runs in two different applications; one is a GUI application and the other is a Windows service. The weird part is that while the GUI application technically seems to have more to do (drawing the GUI, calculating some stats and so on), the Windows service consistently uses more of the CPU when it runs. Where the GUI application uses around 3-4% CPU, the service uses in the region of 6-8%.
When running them together, the CPU loads of both applications approximately double.
The basic code is the same in both applications, except for the addition of the GUI code in the Windows Forms application.
Is there any reason for this behavior? Do Windows service applications have some kind of inherent overhead, or do I need to look through the code to find the source of this, in my book, unexpected behavior?
EDIT:
Having had time to look more closely at the code, I think the suggestion below, that the GUI application spends some time waiting for repaints and that this lowers its CPU load, is likely incorrect. Both applications are threaded, so GUI repaints should not influence the CPU load.
Just to be sure, I first tried removing all GUI components from the application, leaving only a blank form. That did not increase the CPU load of the program. I then went through and stripped out all the calls to Synchronize in the worker threads which were used to update the UI. This had the same result: the CPU load did not change.
The code in the service looks like this:
procedure TLsOpcServer.ServiceExecute(Sender: TService);
begin
  // Initialize OPC server as NT Service
  dmEngine.AddToLog( sevInfo, 'Service', 'Name', Sender.Name );
  AddLocalServiceKeysToRegistry( Sender.Name );
  dmEngine.AddToLog( sevInfo, 'Service', 'Execute', 'Started' );
  dmEngine.Start( True );
  //
  while not Terminated do
  begin
    ServiceThread.ProcessRequests( True );
  end;
  dmEngine.Stop;
  dmEngine.AddToLog( sevInfo, 'Service', 'Execute', 'Stopped' );
end;
dmEngine.Start will start and register the OPC server and initialize a socket. It then starts a thread which does... something to incoming OPC signals. The exact same call is made in FormCreate on the main form of the GUI application.
I'm going to look into how the GUI application starts next. I didn't write this code, so trying to puzzle out how it works is a bit of an adventure :)
EDIT 2:
This is a little bit interesting. I ran both applications for exactly 1 minute each, running AQTime to benchmark them. This is the most interesting part of the results:
In the service:
Procedure name: TSignalList::HandleChild
Execution time: 20.105963821084
Hit count: 5961231
In the GUI application:
Procedure name: TSignalList::HandleChild
Execution time: 7.62424101324976
Hit count: 6383010
EDIT 3:
I'm finally back in a position where I can keep looking at this problem. I have found two procedures which both have about the same hit count during a five-minute run, yet in the service the execution time is much higher. For HandleValue the hit count is 4,300,258 and the execution time is 21.77 s in the service, while in the GUI application the hit count is 4,254,018 with an execution time of 9.75 s.
The code looks like this:
function TSignalList.HandleValue(const Signal: string; var Tag: TTag; const CreateIfNotExist: Boolean): HandleStatus;
var
  Index: integer;
begin
  result := statusNoSignal;
  Tag := nil;
  if not Assigned( Values ) then
  begin
    Values := TValueStrings.Create;
    Values.CaseSensitive := defDefaultCase;
    Values.Sorted := True;
    Values.Duplicates := dupIgnore;
    Index := -1; // Guaranteed no items in the list
  end else
  begin
    Index := Values.IndexOf( Signal );
  end;
  if Index = -1 then
  begin
    if CreateIfNotExist then
    begin
      // Value signal does not exist, create it
      Tag := TTag.Create;
      if Values.AddObject( Signal, Tag ) > -1 then
      begin
        result := statusAdded;
      end;
    end;
  end else
  begin
    Tag := TTag( Values.Objects[ Index ] );
    result := statusExist;
  end;
end;
Both applications enter the "CreateIfNotExist" case exactly the same number of times. TValueStrings is a direct descendant of TStringList without any overloads.
Have you timed the execution of the core functionality? If so, did you measure a difference? I suspect that if you do, you won't find much difference between them, unless you add other functionality, like updating the GUI, to that core code.
Consuming less CPU doesn't mean it's running slower. The GUI app could be waiting more often on repaints, which depend on the GPU as well (and maybe other parts of the system). Therefore, the GUI app may consume less CPU power, because the CPU is waiting for other parts of your system before it can continue with the next instruction.
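If you want a quick sanity check outside a profiler, here is a minimal sketch of timing one core call with the Windows high-resolution counter (TimeHandleValue, the signal name and the 'Perf' log category are made up; HandleValue and AddToLog are taken from the question):
// uses Windows, SysUtils
procedure TimeHandleValue(SignalList: TSignalList);
var
  Freq, T1, T2: Int64;
  Tag: TTag;
begin
  QueryPerformanceFrequency(Freq);
  QueryPerformanceCounter(T1);
  SignalList.HandleValue('SomeSignal', Tag, True);
  QueryPerformanceCounter(T2);
  // log the elapsed seconds through the same logging facility used elsewhere
  dmEngine.AddToLog(sevInfo, 'Perf', 'HandleValue', FormatFloat('0.000000', (T2 - T1) / Freq) + ' s');
end;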
I've used delayed_job in the past. I have an old project that runs on a server where I can't upgrade from Ruby 1.8.6 to 1.8.7, and therefore can't use delayed_job, so I'm trying BackgroundJobs: http://codeforpeople.rubyforge.org/svn/bj/trunk/README
I have it working so that my job runs, but something doesn't seem right. For example, if I run the job like this:
jobs = Bj.submit "echo hi", :is_restartable => false, :limit => 1, :forever => false
Then I see the job in the bj_job table and I see that it completed along with 'hi' in stdout. I also see only one job in the table and it doesn't keep re-running it.
For some reason if I do this:
jobs = Bj.submit "./script/runner ./jobs/calculate_mean_values.rb #{self.id}", :is_restartable => false, :limit => 1, :forever => false
The job still completes as expected; however, it keeps inserting new rows into the bj_job table, and the method runs over and over until I stop my dev server. Is that how it is supposed to work?
I'm using Ruby 1.8.6 and Rails 2.1.2 and I don't have the option of upgrading. I'm using the plugin flavor of Bj.
Because I just need to run the process once after the model is saved, I have it working by using script/runner directly like this:
system " RAILS_ENV=#{RAILS_ENV} ruby #{RAILS_ROOT}/script/runner 'CompositeGrid.calculate_values(#{self.id})' & "
But I would like to know if I'm doing something wrong with Background Jobs.
OK, this was stupid user error. As it turns out, I had a callback that was restarting the process and creating an endless loop. After fixing the callback it is working exactly as expected.
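For anyone hitting the same thing, one way such a loop can be broken is to guard the callback so the save performed by the job does not re-submit it; a hypothetical sketch (values_calculated? is an assumed flag, the Bj.submit call is the one from the question):
class CompositeGrid < ActiveRecord::Base
  after_save :queue_calculation

  def queue_calculation
    # skip re-submitting when the job itself saves the calculated values
    return if values_calculated?
    Bj.submit "./script/runner ./jobs/calculate_mean_values.rb #{id}",
              :is_restartable => false, :limit => 1, :forever => false
  end
end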