Database stopped when running 500 queries per second - AJAX

I built a chat application in which the chat page is reloaded every second through AJAX, and I use a DB2 Express-C database to store the messages.
One day 500 users used the app at the same time, and at that point the database stopped working.
Does running 500 queries at once, in one second, have any effect on the database?
Please tell me how to run queries every second without affecting the database's functionality.

The red mark on the DB2 icon means that the instance stopped working. The issue could be related to a memory problem or something else.
You have to check the db2diag.log file and look for messages. It is highly probable that you have information from the time the instance stopped. The first failure data capture (FFDC) feature collects all of that information in the diag directory when a crash occurs.
To get running again, you just need to restart DB2. You can create a task that checks whether the instance is up and, if not, tries to restart it. However, this is the wrong way to keep DB2 up.
You should look at what happened at the time DB2 crashed. Probably the memory required for the 500 agents was too high, and DB2 could not reserve any more.
Are you running other processes on the same DB2 server? Perhaps one of them corrupted the DB2 memory.
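As a rough sketch (assuming a Linux/UNIX server, an instance owner called db2inst1 and the default diagnostic path; adjust to your setup), checking the diagnostics and adding a crude keep-alive could look like this, although finding the root cause in db2diag.log is still the real fix:

    # show the most severe diagnostic entries from the last day
    su - db2inst1 -c "db2diag -level 'Severe,Critical' -H 1d"

    # find where the FFDC/diagnostic files are written
    su - db2inst1 -c "db2 get dbm cfg" | grep -i DIAGPATH

    # crude keep-alive for cron: db2start is harmless if the instance is already up
    # (it just returns SQL1026N), so running it every few minutes only restarts a dead instance
    su - db2inst1 -c "db2start"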

Related

Debezium Oracle Connectors CDC usage on top of Kafka and Backup / DataGuard

We are trying to use the Oracle Debezium connector (1.9b) on top of Kafka.
We tried two settings for snapshot.mode: schema_only and initial.
We use "log.mining.strategy":"online_catalog" (should be the default)
We are using a PDB/CDB Oracle instance on Oracle 19c.
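For context, the connector registration we use is roughly of this shape (hosts, credentials, server name and table list below are placeholders, not our real values):

    curl -X POST -H "Content-Type: application/json" http://connect:8083/connectors -d '{
      "name": "oracle-cdc",
      "config": {
        "connector.class": "io.debezium.connector.oracle.OracleConnector",
        "database.hostname": "oracle-host",
        "database.port": "1521",
        "database.user": "c##dbzuser",
        "database.password": "***",
        "database.dbname": "ORCLCDB",
        "database.pdb.name": "ORCLPDB1",
        "database.server.name": "oracle",
        "table.include.list": "APP.ORDERS",
        "snapshot.mode": "schema_only",
        "log.mining.strategy": "online_catalog",
        "database.history.kafka.bootstrap.servers": "kafka:9092",
        "database.history.kafka.topic": "schema-changes.oracle"
      }
    }'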
My understanding is that:
1. The connector creates a session to the PDB.
2. It adds a shared lock to ensure the structure will not change for a short duration.
3. The DDL structure is retrieved from the PDB.
4. It creates a session to the CDB.
5. It retrieves the last LSN event from the CDB.
6. If snapshot.mode == initial, it uses a "JDBC query to retrieve the whole data" from the PDB.
7. It does NOT seem to release the session (or rather the process) opened against the PDB.
8. It continues to mine new events from the CDB.
9. ... it seems to work for a couple of minutes.
10. After a couple of minutes, the number of processes increases drastically.
11. The Oracle database freezes, due to an excess of processes (which you can follow using v$process; see the query sketch after this list).
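For reference, the query sketch I use to watch the process count (and the configured limits) is simply:

    sqlplus -s / as sysdba <<'EOF'
    -- server process count, which keeps climbing until the processes/sessions limits are hit
    SELECT COUNT(*) FROM v$process;
    SELECT name, value FROM v$parameter WHERE name IN ('processes', 'sessions');
    EOF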
We had a lot of error messages, like:
A. Failed to resolve Oracle database
B. IO Error: Got minus one from a read call
C. ORA-04025: maximum allowed library object lock allocated
D. None of log files contains offset SCN: xxxxxxx
The message in point D says it tried to use an offset that was part of "an old" archived log.
Every 30 min (or sooner, if we have more activity), the log is switched from one file to another.
And a backup occurs every 30 minutes, which reads the logs, backs them up and then deletes them.
It seems to me that Debezium tried to reach a past archived log that had been deleted by the backup process.
The process of "deleting previous archived logs" seems "correct" to me, doesn't it?
Why does Debezium try to go through the archived logs? When snapshot.mode == schema_only it should only catch new events, so why use the archived ones?
How can we manage this?
I hope that if this point is resolved in my use case, Debezium will stop "looping", creating new processes, and ultimately will stop blocking the Oracle DB.
If you have any clues or opinions, don't hesitate to share them. Many thanks.
We tried both a shared lock and none.
We tried to limit the number of tables in scope.
I cannot ask for the backup to be stopped: in production it's not a good idea, and in test it seems the backup is only there to clean up the archived logs and avoid ending up with completely full storage.
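For what it's worth, here is the kind of check I run as a DBA user to see whether the SCN mentioned in message D is still covered by an archived log that the backup has not yet deleted (1234567 is a placeholder for the SCN reported in the error):

    sqlplus -s / as sysdba <<'EOF'
    -- which archived logs cover the SCN, and whether they have already been deleted
    SELECT sequence#, name, first_change#, next_change#, deleted, status
    FROM   v$archived_log
    WHERE  1234567 BETWEEN first_change# AND next_change#;
    EOF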

How to schedule BI server restarts every day?

I use Oracle BI 12c and I am facing a strange problem while using it. There is an ETL process that transfers data from one database to another, and that database (the DWH) is used by OBIEE.
Even though the transfer succeeds and yesterday's data is present in the tables, creating an analysis for that day returns no data. The only way I can get it back is by restarting the BI servers through Enterprise Manager.
So my question is: how can I schedule the BI servers to restart every morning, or is there any other solution to this problem?
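To be concrete, the kind of scheduled restart I have in mind is a cron entry calling the standard OBIEE 12c scripts under $DOMAIN_HOME/bitools/bin (the domain path below is from my environment and is only an example):

    # /etc/cron.d/obiee-restart : bounce the BI services every morning at 06:00
    0 6 * * * oracle /u01/app/oracle/config/domains/bi/bitools/bin/stop.sh && /u01/app/oracle/config/domains/bi/bitools/bin/start.sh >> /tmp/obiee_restart.log 2>&1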
EDIT
Today, while tracing the log messages produced when creating an analysis, I noticed two error messages repeating. One is the following:
POST
/bisearch/rest/BISearchEventService/postEvent?eventType=web_catalog&eventSubType=web_catalog_object_updated&objectKey=%2f&tenantID=ssi&ownerID=18446744073709551615&sessionID=0000&locale=en_US
and the other one is
[43065] The connection with Cluster Controller
::ffff:10.11.10.19.34713 was lost.
I guess the last one is the issue. Has anybody faced it?

Azure redis cache multiple databases

Whenever I insert a key into any one DB of the Redis cache, it gets inserted into all 16 DBs, and removing the key from any one DB deletes it from all of them. Attached is a screenshot showing this. As far as I know, the DBs are independent of each other, and at any time a key should only be inserted into or removed from the DB currently in use. Could anyone please explain the observed behaviour?
The Azure Redis Portal Console currently doesn't handle the SELECT statement correctly (because each command goes out on a new connection), so you are actually doing a GET on DB 0. This is a known issue, and we are in the process of creating V2 of the portal console, which will fix it. The rough ETA is sometime in the next couple of months.
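If you connect with a single, persistent connection instead (redis-cli against the non-SSL port, if you have enabled it, or any client library) you will see the expected per-database isolation. A rough sketch, with a placeholder host and key:

    redis-cli -h mycache.redis.cache.windows.net -p 6379 -a <access-key>
    # inside that one connection:
    SELECT 1
    SET foo "bar"
    SELECT 0
    GET foo        # returns (nil) - the key only exists in DB 1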

How to trace error logs of a stored procedure in a PROD environment?

I am not an expert in Oracle DB, but I am curious to know how we can check the logs of a particular stored procedure when it gets executed.
I checked the trace folder, but I don't know how, or which file, I have to analyse.
When I checked the UNIX logs, they show a timeout error. It seems the application did not get a response from one of the procedures. Sometimes the job gets processed after 2-3 hours, and sometimes it doesn't; it should finish in 30 minutes at most. I am not sure whether the DB is the culprit or the web server (WAS).
In extreme cases I ask for a DB restart and a WAS restart, and this solves our problem.
Is it possible to trace the problem? I am in a PROD environment. The behaviour is not the same in the UAT or SIT environments.
Could this be a problem on the WAS or DB side? Please throw some light on this.
Thanks
I think what you want is DBMS_TRACE. You'll have to enable tracing in your session and execute the procedure manually.
If, by chance, this procedure is being executed by the Oracle Scheduler, you may find some info in the alert log. I'd suggest checking that anyway.
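A minimal sketch of a manual trace run (assuming the trace tables from $ORACLE_HOME/rdbms/admin/tracetab.sql are installed; the credentials and my_proc are placeholders for your own user and procedure):

    sqlplus -s app_user/app_pass <<'EOF'
    -- record every call made while the procedure runs in this session
    EXEC DBMS_TRACE.SET_PLSQL_TRACE(DBMS_TRACE.TRACE_ALL_CALLS);
    EXEC my_proc;
    EXEC DBMS_TRACE.CLEAR_PLSQL_TRACE;

    -- the collected events end up in the SYS trace tables
    SELECT * FROM sys.plsql_trace_events ORDER BY event_seq;
    EOF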
If the procedure used to run in 30 minutes and now takes 2-3 hours to complete, and there were no changes to it, then the problem is not in the procedure.
I'd suggest you check for unusable indexes, redo log switches, blocking sessions, table locks etc.; it's hard to say exactly without knowing the procedure. You say it's a prod environment, so the DBA must surely have some performance monitoring in place. If, by chance, you have Oracle Enterprise Manager, go and take a look at what is happening while the procedure is being executed.
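As a starting point, rough checks along these lines (run as a DBA user) usually narrow it down:

    sqlplus -s / as sysdba <<'EOF'
    -- indexes that have become unusable (e.g. after maintenance or a failed rebuild)
    SELECT owner, index_name, status FROM dba_indexes WHERE status = 'UNUSABLE';

    -- sessions that are currently blocked, and who is blocking them
    SELECT sid, serial#, blocking_session, event, seconds_in_wait
    FROM   v$session
    WHERE  blocking_session IS NOT NULL;
    EOF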

How to troubleshoot Oracle database server errors?

My team inherited an Oracle-based web application and they are fairly inexperienced with Oracle database servers.
The Oracle 10g server is running on Windows Server 2003 with plenty of disk space, and from time to time all connectivity is lost: the application stops working and not even SQL*Plus is able to connect to the database server.
Yet the Windows Service manager says the service is up and running. A restart usually fixes the problem, but we need to troubleshoot it properly so we know what's causing it and can prevent it from happening again.
Where should we start looking for clues? What are the critical log files we should be investigating?
On the server you should have an environment variable called ORACLE_HOME which indicates the root of the Oracle install. Most likely the Oracle trace/dump folders will be under there. Search for a folder called "bdump" (background dump). That's where the main log file, known as the alert log, will be, as well as trace files generated by background processes. There will be an adjacent folder called "udump" which will contain any trace files generated by user processes.
However, my real advice is that you should either hire someone who knows Oracle or get Oracle Support involved.
The alert log would be the first file to check.
It will probably be in $ORACLE_HOME/admin/bdump and called something like alert_<SID>.log.
It contains most of the important actions that the database does, as well as any important errors that occur.
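A quick way to find and skim it; on 10g the exact directory comes from the background_dump_dest parameter, so treat the path below (with ORCL as a placeholder SID) as a sketch:

    # ask the database where its background dump destination is
    sqlplus -s / as sysdba <<'EOF'
    SHOW PARAMETER background_dump_dest
    EOF

    # then tail the alert log in that directory
    tail -200 /u01/app/oracle/admin/ORCL/bdump/alert_ORCL.log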
I have to agree with cagcowboy. Check your alert logs for errors. If there are no errors, then maintain a SYSDBA login into the database and, when it hangs, attempt to do a hang analysis. See Metalink note 215858.1 on hanganalyze.
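If the instance is so far gone that a normal login hangs, the usual approach from that note (from memory, so double-check it) is a preliminary connection plus oradebug; if the prelim connection cannot produce the dump, the SYSDBA session you kept open beforehand is the fallback:

    sqlplus -prelim / as sysdba <<'EOF'
    oradebug setmypid
    oradebug unlimit
    oradebug hanganalyze 3
    oradebug tracefile_name
    EOF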
Have you tried tnsping? We've occasionally run into problems with the listener that require an assist from our DBA. tnsping is the diagnostic tool we use for triage.
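For example, with ORCL being whatever alias is in your tnsnames.ora:

    # round-trip check against the listener; a hang or TNS- error points at the listener/network layer
    tnsping ORCL 3

    # on the database server itself, the listener's own view
    lsnrctl status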
I would recommend hiring an experienced Oracle DBA if at all possible.
Check the alert log to see how the DB is structured; sometimes badly set parameters cause hangs or slow performance. Or you can shut down and start in mount mode, then check the v$parameter values for problems. Setting the total memory correctly is very important.
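Something along these lines; the parameter names below are just the usual memory-related suspects on older releases, adjust for your version:

    sqlplus -s / as sysdba <<'EOF'
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    SELECT name, value FROM v$parameter
    WHERE  name IN ('sga_target', 'sga_max_size', 'pga_aggregate_target', 'processes', 'sessions');
    EOF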
