What are AQ$_PLSQL_NTFNnnnn scheduler jobs used for? - oracle

I don't use Advanced Queuing at all, but the number of AQ$_PLSQL_NTFNnnnn scheduler jobs grows continuously.
Currently there are 8 such jobs, and because of them I need to raise the maximum count of simultaneously running jobs.
About 2 months ago a limit of 10 was fine; currently my limit is 15, and because of these 8 "unnecessary" (at least for me) jobs I need to increase it to 20 or even 25 :-S
So, what are they used for? Can I just drop/disable them?
UPD: I've increased the number of simultaneous jobs to 25, and overnight the number of AQ... jobs rose to 25 :-S Is this a joke?!

It sounds to me like something is using AQ somewhere in your database.
I googled around a bit, and there is some possibly useful information here - http://www.ora600.be/node/5547 - mostly about the hidden parameter _srvntfn_max_concurrent_jobs, which apparently limits the total number of jobs running for notifications.
Information seems to be hard to come by, but apparently notifications go into the table sys.alert_qt, so you could try having a look in there to see what appears.
You could also have a look in the ALL_QUEUES and other ALL_QUEUE* tables to see if there are any queues on your database you are not aware of.
I am assuming you are using Oracle 11gR1 or 11gR2?
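If you want to dig further, here are a few lookups along those lines (the second and third need SYS access; the hidden-parameter query against x$ tables in particular must be run as SYS):

-- Queues defined in the database; look for any you did not create
SELECT owner, name, queue_table, queue_type
  FROM all_queues
 ORDER BY owner, name;

-- Current value of the hidden limit mentioned above (run as SYS)
SELECT a.ksppinm AS name, b.ksppstvl AS value
  FROM x$ksppi a, x$ksppcv b
 WHERE a.indx = b.indx
   AND a.ksppinm = '_srvntfn_max_concurrent_jobs';

-- Pending notification messages (run as SYS)
SELECT * FROM sys.alert_qt;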

When using a PL/SQL callback function to process an AQ queue, we have seen these jobs being generated. You can check this view to find any registered subscriptions:
select * from dba_subscr_registrations;
More about AQ PL/SQL Callback
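If one of those registrations turns out to be something you can safely remove, the counterpart of DBMS_AQ.REGISTER is DBMS_AQ.UNREGISTER. A minimal sketch - the queue, subscriber and callback names are placeholders you would take from the SUBSCRIPTION_NAME and LOCATION_NAME columns of DBA_SUBSCR_REGISTRATIONS:

DECLARE
  reg_info sys.aq$_reg_info;
BEGIN
  -- All names below are placeholders; substitute the real values
  -- from DBA_SUBSCR_REGISTRATIONS before running this.
  reg_info := sys.aq$_reg_info(
                'MYSCHEMA.MY_QUEUE:MY_SUBSCRIBER',    -- queue:subscriber
                DBMS_AQ.NAMESPACE_AQ,                  -- AQ namespace
                'plsql://MYSCHEMA.MY_CALLBACK_PROC',   -- callback location
                HEXTORAW('FF'));                       -- user context
  DBMS_AQ.UNREGISTER(sys.aq$_reg_info_list(reg_info), 1);
END;
/

Only do this for subscriptions you are sure nothing depends on; unregistering a callback an application relies on will silently stop its message processing.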

Related

Laravel, Queue, Horizon and >10 Servers (Workers) - Creating several million jobs takes extremely long

I need to create several tens of millions of jobs.
I have tried it with for-loops and Bus::batch([]), and unfortunately creating the jobs takes longer than processing them with the 10 servers/workers. That means the workers have to wait until the jobs show up in the database (Redis etc.). With redis-benchmark I could verify that Redis is not the problem.
Anyway... is there a way to create jobs in BULK (not batch)? I'm thinking of something like:
INSERT INTO ... () VALUES (), (), (), (), ...
Creating several million jobs in a for-loop or in a batch seems to be very slow, probably because it's always just one query at a time instead of one bulk insert.
I would be very grateful for any help!
Writing millions of records will be slow under any conditions. I'd recommend maximizing your queue performance using several methods (a bulk-insert sketch follows below):
Create one job that creates all the other jobs, if possible
Use only QUEUE_CONNECTION=redis for your queues, as Redis stores data in RAM, which is the fastest option
Create your jobs after the response has already been sent
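On the bulk-insert idea itself: if you use the database queue driver rather than Redis, Laravel's default jobs migration gives you a table you can fill with multi-row inserts. The payload column must contain the exact JSON Laravel serializes for a job, so dispatch one job normally first and copy its payload shape. A rough sketch (MySQL):

-- Multi-row insert into Laravel's default `jobs` table (database driver).
-- The payload placeholders must be replaced with the JSON-serialized job
-- exactly as Laravel writes it.
INSERT INTO jobs (queue, payload, attempts, available_at, created_at)
VALUES
  ('default', '...serialized job JSON...', 0, UNIX_TIMESTAMP(), UNIX_TIMESTAMP()),
  ('default', '...serialized job JSON...', 0, UNIX_TIMESTAMP(), UNIX_TIMESTAMP()),
  ('default', '...serialized job JSON...', 0, UNIX_TIMESTAMP(), UNIX_TIMESTAMP());

With Redis as the queue backend there is no SQL table to insert into; the analogous optimization would be pipelining the Redis pushes.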

Is it normal that CockroachDB Serverless uses 500K RUs in 19 hours with no connections?

I set up a CockroachDB cluster for a school project. The only thing I have done is create 1 database with 1 table of 6 rows, but when I look at the dashboard I have already used 500K RUs. This seems like a huge amount to me, but I'm new to cloud databases, so I don't know whether this is normal behavior or not. I'm just worried I will run out of RUs without doing anything on the database. The graph of RU usage shows consumption even when there are no connections and the hub wasn't open. Can anyone clarify this for me?
I think this explanation is more likely to be the reason:
https://www.cockroachlabs.com/docs/cockroachcloud/serverless-faqs.html#my-cluster-doesnt-have-any-current-co[…]ing-rus-when-there-are-no-connections
To summarize, the monitoring console uses up some RUs. So if you have a browser tab open with the console, it will use RUs even if you don't have any connections open.
As that FAQ says, this can use ~8 RUs per second. Over 19 hours that is 8 × 3,600 × 19 ≈ 547,000 RUs, which lines up with the ~500K you saw. The solution is to not leave the console open.
On the stats point, note that auto-stats collection is only triggered when data in the table changes.
I believe what you're seeing is the automatic metric collection. You can read more about it in this FAQ.

Azure SQL Data IO 100% for extended periods for no apparent reason

I have an Azure website running about 100K requests/hour and it connects to Azure SQL S2 database with about 8GB throughput/day. I've spent a lot of time optimizing the database indexes, queries, etc. Normally the Data IO, CPU and Log IO percentages are well behaved in the 20% range.
A recent portion of the data throughput is retained for supporting our customers. I have a nightly maintenance procedure that removes obsolete data to manage database size. This mostly works well with the exception of removing image blobs in a varbinary(max) field.
The nightly procedure has a loop that sets the varbinary(max) field of 10 records to NULL at a time, waits a couple of seconds, then does the next 10. The nightly total for this loop is about 2000 records.
This loop will run for about 45-60 minutes and then stop, with no return to my remote SQL Agent job and no error reported. A second, and sometimes a third, run of the procedure is necessary to finish setting the desired blobs to NULL.
In an attempt to alleviate the load on the nightly procedure, I started running a job once every 30 seconds throughout the day - it sets one blob to null each time.
Normally this trickle job is fine and runs in 1 - 6 seconds. However, once or twice a day something goes wrong and I can find no explanation for it. The Data I/O percentage peaks at 100% and stays there for 30 - 60 minutes or longer. This causes the database responsiveness to suffer and the website performance goes with it. The trickle job also reports running for this extended period of time. If I stop the Sql Agent job, it can take a few minutes to stop but the Data I/O continues at 100% for the 30 - 60 minute period.
The web service requests and database demands are relatively steady throughout the business day - no volatile demands that would explain this. No database deadlocks or other errors are reported. It's as if the database hits some kind of backlog limit where its ability to keep up suddenly drops and then it can't catch up until something that is jammed finally clears. Then the performance will suddenly return to normal.
Do you have any ideas what might be causing this intermittent and unpredictable issue? Any ideas what I could look at when one of these events is happening to determine why the Data I/O is 100% for an extended period of time? Thank you.
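For reference, the kind of loop described above might look roughly like this; the table and column names are made up, not the actual schema:

-- Hypothetical sketch of the nightly blob-clearing loop described above
DECLARE @batch INT = 10;
WHILE EXISTS (SELECT 1 FROM dbo.Documents
              WHERE ImageBlob IS NOT NULL AND IsObsolete = 1)
BEGIN
    UPDATE TOP (@batch) dbo.Documents
       SET ImageBlob = NULL
     WHERE ImageBlob IS NOT NULL AND IsObsolete = 1;

    WAITFOR DELAY '00:00:02';  -- pause a couple of seconds between batches
END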
If you are on SQL DB V12, you may also consider using the Query Store feature to find the root cause of this performance problem. It's now in public preview.
In order to turn on Query Store just run the following statement:
ALTER DATABASE your_db SET QUERY_STORE = ON;
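Once it has collected some data, you can ask Query Store which statements did the most IO during one of the bad windows. Something along these lines, using the standard Query Store catalog views:

-- Top 10 queries by total logical reads recorded by Query Store
SELECT TOP (10)
       qt.query_sql_text,
       SUM(rs.avg_logical_io_reads * rs.count_executions) AS total_logical_reads
  FROM sys.query_store_query_text qt
  JOIN sys.query_store_query q ON q.query_text_id = qt.query_text_id
  JOIN sys.query_store_plan p ON p.query_id = q.query_id
  JOIN sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id
 GROUP BY qt.query_sql_text
 ORDER BY total_logical_reads DESC;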

Reduce execution time of Oracle stored procedure

I have a DB job running daily that manages to process 10,000 rows from a table of 3,500,000 rows in three hours.
Tuning the main cursor's SELECT statement can only save me 30 minutes, but I need to reduce the job's running time from 3 hours to 10-15 minutes.
I have to state that there is only the main loop over the cursor, and for each record there are calls to external systems to get or send data, so this is an overhead I cannot control. The time for each record to be processed after it is fetched is a little less than a second, and that is not acceptable ...
Is there something I could do? All ideas are more than welcome!
IMHO, you can submit a job for each call to the external system, or try to run them in parallel; maybe you can use Advanced Queuing. To explain: enqueue each selected row, and the calls to the external system can then be processed from the queue.
You may try to process rows in parallel.
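One built-in way to do that, if your version has it (11gR2 and later), is DBMS_PARALLEL_EXECUTE, which splits the table into chunks and runs your per-row processing against each chunk in parallel scheduler jobs. A sketch, with the table and procedure names assumed:

-- Split MY_BIG_TABLE (placeholder) into ROWID ranges and process them
-- with up to 8 concurrent jobs; MY_PROC (placeholder) must accept the
-- :start_id/:end_id ROWID bounds of each chunk.
BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'process_rows');

  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => 'process_rows',
    table_owner => USER,
    table_name  => 'MY_BIG_TABLE',
    by_row      => TRUE,
    chunk_size  => 1000);

  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'process_rows',
    sql_stmt       => 'BEGIN my_proc(:start_id, :end_id); END;',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 8);

  DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'process_rows');
END;
/

Note that the external calls still dominate; parallelism helps only to the extent that the external systems can handle concurrent requests.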

How can I distribute a task between many process in Ruby?

I have a Ruby daemon that selects 100 records from the database and performs a task with them.
To make it faster, I usually create 3 instances of the same daemon, and each one selects different data by using MySQL's LIMIT and OFFSET.
The problem is that sometimes a task is performed 2 or 3 times on the same record.
So I think relying only on LIMIT and OFFSET is not enough, since 2 or more daemons can collect the same data at the same time.
How can I do it safely, avoiding 2 instances selecting the same data? For example:
Daemon 1 => selects records from 1 to 100
Daemon 2 => selects records from 101 to 200
Daemon 3 => selects records from 201 to 300
Rather than rolling your own solution, you might want to look at existing solutions for processing background jobs, like Resque (a personal favorite). With Resque, you would queue a job for each of your rows using whatever trigger makes sense in your application (it's hard to say without more context), for example a link on your website. At all times you would keep X workers running (three in your case), and Resque will do the queue management for you. Resque uses Redis as a backend, so it supports atomic push/pop out of the box (no more double-processing).
Resque also comes with a very intuitive and easy to use web interface for monitoring your jobs and workers.
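If you'd rather keep the plain-MySQL approach, note that MySQL 8.0 added FOR UPDATE SKIP LOCKED, which lets each daemon atomically claim rows that no other daemon currently holds. A sketch, where the table and column names are assumptions:

-- Each daemon claims up to 100 unclaimed rows in its own transaction;
-- SKIP LOCKED makes concurrent daemons skip each other's locked rows
-- instead of blocking or reading the same ones.
START TRANSACTION;

SELECT id
  FROM tasks
 WHERE processed_at IS NULL
 ORDER BY id
 LIMIT 100
   FOR UPDATE SKIP LOCKED;

-- ... perform the task for the selected ids ...

UPDATE tasks SET processed_at = NOW() WHERE id IN (/* selected ids */);

COMMIT;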
