Azure redis cache multiple databases - caching

Whenever I insert a key into any one DB of the Redis cache, it gets inserted into all 16 DBs, and removing the key from any one DB deletes it from all DBs. Attached is a screenshot showing this. As far as I know, the DBs are independent of each other, and a key should only be inserted into/removed from the DB currently in use. Could anyone please explain the observed behaviour?

The Azure Redis portal console currently doesn't handle the SELECT command correctly (because each command goes out on a new connection), so you are actually doing a GET on DB 0 every time. This is a known issue, and we are in the process of creating V2 of the portal console, which will fix it. Rough ETA is sometime in the next couple of months.
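To see the expected isolation between databases, you can bypass the portal console and keep one persistent connection per database, so the SELECT actually sticks. A minimal sketch with the Python redis-py client; the host name and access key are placeholders:

import redis

# One client per logical database; each keeps a persistent connection,
# so the SELECT issued for the db parameter actually sticks.
# Host name and access key below are placeholders -- substitute your own.
db0 = redis.StrictRedis(host="mycache.redis.cache.windows.net", port=6380,
                        password="<access-key>", ssl=True, db=0)
db1 = redis.StrictRedis(host="mycache.redis.cache.windows.net", port=6380,
                        password="<access-key>", ssl=True, db=1)

db0.set("greeting", "hello")
print(db0.get("greeting"))  # b'hello'
print(db1.get("greeting"))  # None -- the key exists only in DB 0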

Related

Debezium Oracle Connectors CDC usage on top of Kafka and Backup / DataGuard

We are trying to use the Oracle Debezium connector (1.9b) on top of Kafka.
We tried 2 settings for snapshot.mode: schema_only and initial.
We use "log.mining.strategy":"online_catalog" (should be the default)
We are using a PDB/CDB Oracle instance on Oracle 19c.
My understanding is that:
1. The connector creates a session to the PDB.
2. It adds a shared lock to ensure the structure will not change (shared) for a short duration.
3. The DDL structure is retrieved from the PDB.
4. It creates a session to the CDB.
5. It retrieves the last SCN event from the CDB.
6. If snapshot.mode == initial, it uses a "JDBC query to retrieve the whole data" from the PDB.
7. It does NOT seem to release the session (or rather the process) opened to the PDB.
8. It continues to mine new events from the CDB.
9. ... and it seems to work for a couple of minutes.
After a couple of minutes, the number of processes increases drastically.
The Oracle database freezes, due to an excess of processes (which you can follow using v$process).
We had a lot of error messages, like:
A. Failed to resolve Oracle database
B. IO Error: Got minus one from a read call
C. ORA-04025: maximum allowed library object lock allocated
D. None of log files contains offset SCN: xxxxxxx
The message in point D says Debezium tries to use an offset which was part of "an old" archived log.
Every 30 minutes (or sooner, if we have more activity), the log is switched from one file to another.
And a backup runs every 30 minutes, which reads the archived logs, backs them up, and then deletes them.
It seems to me that Debezium tries to reach a past archived log which was already deleted by the backup process.
The process of "deleting previous archived logs" seems "correct" to me, isn't it?
Why does Debezium try to go through the archived logs? With snapshot.mode == schema_only it should only capture new events, so why does it use the archived ones?
How can we manage this?
I hope that if this point is resolved in my use case, Debezium will stop "looping", creating new processes, and ultimately blocking the Oracle DB.
If you have any clues or opinions, don't hesitate to share them. Many thanks.
We tried using both the shared lock and none.
We tried limiting the number of tables in scope.
I cannot ask for the backup to be stopped: in production it's not a good idea, and in test the backup seems to be there only to clean up the archived logs and avoid ending up with completely full storage.
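For reference, this is roughly how the connector described above might be registered with the Kafka Connect REST API; a sketch in Python, where the host names, credentials, and topic names are placeholders and the property names are the Debezium 1.9-era ones:

import json
import requests  # assumes the Kafka Connect REST API is reachable

connector = {
    "name": "oracle-cdc-example",  # hypothetical connector name
    "config": {
        "connector.class": "io.debezium.connector.oracle.OracleConnector",
        "database.hostname": "oracledb.example.com",
        "database.port": "1521",
        "database.user": "c##dbzuser",
        "database.password": "<password>",
        "database.dbname": "ORCLCDB",      # the CDB
        "database.pdb.name": "ORCLPDB1",   # the PDB
        "database.server.name": "oracle-cdc",
        "snapshot.mode": "schema_only",    # or "initial"
        "log.mining.strategy": "online_catalog",
        "database.history.kafka.bootstrap.servers": "kafka:9092",
        "database.history.kafka.topic": "schema-changes.oracle",
    },
}

resp = requests.post("http://connect:8083/connectors",
                     headers={"Content-Type": "application/json"},
                     data=json.dumps(connector))
resp.raise_for_status()

On the archived-log question: even with schema_only, streaming resumes from the offset SCN recorded in Kafka Connect, so if the connector lags or restarts and that SCN now only exists in an archived log that the 30-minute backup has already deleted, it cannot resume, which matches error D. The usual remedies are retaining archived logs longer than the connector's worst-case lag or downtime, or resetting the connector's offsets and taking a fresh schema_only snapshot.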

How to schedule BI servers restart every day?

I use Oracle BI 12c and I am facing a strange problem while using it. There is an ETL process that transfers data from one database to another, and that target database (DWH) is used for OBIEE.
Even though the transfer succeeds and the data for yesterday is present in the tables, creating an analysis on that day returns no data. The only way I can get it is by restarting the BI servers through Enterprise Manager.
So my question is: how can I schedule the BI servers to restart every morning, or is there another solution for this problem?
EDIT
Today, while tracing log messages after creating an analysis, I noticed two error messages repeating. One is the following:
POST
/bisearch/rest/BISearchEventService/postEvent?eventType=web_catalog&eventSubType=web_catalog_object_updated&objectKey=%2f&tenantID=ssi&ownerID=18446744073709551615&sessionID=0000&locale=en_US
and the other one is
[43065] The connection with Cluster Controller
::ffff:10.11.10.19.34713 was lost.
I guess the last one is the issue. Has anybody faced it?
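If you do end up scheduling a daily bounce, OBIEE 12c ships lifecycle scripts in DOMAIN_HOME/bitools/bin that can be wrapped in a small script and run from cron or the Windows Task Scheduler. A minimal sketch in Python, assuming a typical domain path (substitute your own, and use the .cmd variants on Windows):

import subprocess

# Hypothetical domain home -- substitute your own.
BITOOLS = "/u01/app/oracle/bi/user_projects/domains/bi/bitools/bin"

def restart_bi():
    # stop.sh / start.sh are the standard OBIEE 12c lifecycle scripts;
    # each call blocks until all BI components are down/up.
    subprocess.run([f"{BITOOLS}/stop.sh"], check=True)
    subprocess.run([f"{BITOOLS}/start.sh"], check=True)

if __name__ == "__main__":
    restart_bi()  # schedule via cron, e.g. "0 6 * * *" for 6 AM daily

Bear in mind that the restart only masks the stale-data symptom; if the BI Server cache is what is holding yesterday's empty result, purging it after the ETL completes (for example with the documented SAPurgeAllCache() call) is the cleaner fix.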

How to clear OBIEE cursor cache (presentation server)

I'm having an issue with a prompt in OBIEE 10g: it displays an old database value because the prompt query is serviced from the cursor cache (Presentation Services). For example, the prompt drop-down initially shows 1 value since there is 1 database row; when I delete this row from the database, the prompt still shows the same value unless I manually clear the cursor cache through Analytics:
Settings > Administration > Manage Sessions > clear cache/cursors
I tried checking the OBIEE Presentation Services config file instanceconfig.xml, but there is no parameter there to permanently disable this cache. I referenced the following link: OBIEE 10G/11G - Presentation Service (Query|Result|Cursor) Cache
Resetting these parameters didn't seem to have any impact on the cursor cache; cache entries are still generated and are not cleared after the timeouts I set (I restarted the OBIEE services after changing the parameters). Am I missing something here?
I would appreciate any pointers on getting the cursor cache cleared/disabled without the manual intervention mentioned above (through Settings > Administration).
At some point I also faced that issue. The presentation cache in OBIEE is a bit shady sometimes.
What I did was add a dummy comparison to the prompt's query, involving SYSDATE with enough precision that each query looks different to the cache.
It's a bit shabby, but at least you don't need any manual intervention... Maybe it can help you.
Good luck!
You may also see this issue when using a Presentation variable, rather than a prompt built using a SQL query.
The problem may be due to the shared Presentation Services query cache: even when a user logs out, the query cursor cache is still shared with other users, so it does not refresh with the new data after the user logs in again.
The cache files are in
ORACLE_INSTANCE/tmp/OracleBIPresentationServices/coreapplication_obipsn/obis_temp
See this document for more detail.
You can configure the Virtual Private Database option on the physical database object in the repository and mark session variables as Security Sensitive, so that the query cache is not shared among users. See the documentation for more detail.

Stored Procedures Overwhelming Oracle.EXE On Oracle 11g On Windows

Until very recently we ran a 3rd-party HR database in an Oracle-on-Unix environment. I had additionally set up various web services that hit stored procedures to carry out a few bespoke processes for our users, and everything ran well for years.
However, now that we have moved to Oracle on a Windows environment, there is suddenly a big problem.
The best example I have is a VB.Net solution that reads a 2000-row CSV of employees into a DataTable, runs a couple of stored procedures to bring back Post Id etc., populates a database table with the results, then feeds it all back out into a new CSV. This process used to take 1-2 minutes to complete on Unix. It now takes well over 2 hours and kills the server!
The problem manifests by overwhelming the CPU on the database server. Any stored procedure call sends Oracle.EXE into overdrive, completely maxing out the CPU core it's using, such that no other stored procedures can run and everything grinds to a halt.
We have run Oracle Enterprise Manager, which suggested the creation of some indexes etc., but nothing improves the issue. Like I say, the SQL ran fine and swiftly for years, and it hasn't changed at all.
Does anybody know what could be causing this? I am completely at a loss.
The way I see it, it must either be:
1. A CPU/hardware issue (but we have investigated and added extra cores etc. to no avail);
2. An Oracle configuration issue; or
3. An issue with the 3rd-party database (which is supposedly identical to what it was on Unix).
Thanks to anyone who read this far.
P.S. I've had a Stack Overflow user account for years but can't get logged into it any more. Back to noobie status for me!
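A common first diagnostic for "identical SQL, suddenly slow after a platform move" is to identify which statements are actually burning CPU and compare their execution plans with the old environment; an export/import often leaves optimizer statistics stale or missing. A minimal sketch using the cx_Oracle driver against v$sql; the credentials and DSN are placeholders, and SELECT privilege on the V$ views is required:

import cx_Oracle

# Hypothetical credentials/DSN -- substitute your own.
conn = cx_Oracle.connect("system", "<password>", "winhost:1521/ORCL")

# Top 5 statements by CPU time (subquery + ROWNUM keeps this 11g-compatible).
# A plan regression after the Unix -> Windows migration would normally
# put the affected stored-procedure SQL at the top of this list.
query = """
    SELECT * FROM (
        SELECT sql_id,
               executions,
               ROUND(cpu_time / 1000000, 1) AS cpu_seconds,
               SUBSTR(sql_text, 1, 80)      AS sql_text
          FROM v$sql
         ORDER BY cpu_time DESC
    ) WHERE ROWNUM <= 5
"""
for row in conn.cursor().execute(query):
    print(row)

From there, DBMS_XPLAN.DISPLAY_CURSOR on the worst sql_id shows whether the plan differs from the old environment.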

Database stopped when running 500 queries per second

I built a chat application in which the chat page is reloaded every 1 second through AJAX, and I used the DB2 Express-C database for storing messages.
One day, 500 users used this app at the same time, and the database stopped working.
Is there any effect on the database from running 500 queries at a time in one second?
Please tell me how to run queries every second without affecting the database's functionality.
The red mark on the DB2 icon means that the instance stopped working. This issue could be related to a memory problem or something else.
You have to check the db2diag.log file and look for messages. It is highly probable that you have information from the time the instance stopped. The first failure data capture (FFDC) feature collects all of that information in the diag directory when a crash occurs.
In order to fix the problem, you just need to restart DB2. You can create a task that checks whether the instance is up and, if not, tries to restart it. However, this is the wrong way to keep DB2 up.
You should find out what happened at the time DB2 crashed. Probably the memory for the 500 agents was too high, and DB2 could not reserve more.
Are you running other processes on the same DB2 server? Possibly one of them corrupted the DB2 memory.
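For the stopgap "task that checks the instance and restarts it", a minimal sketch is shown below (Python, assuming it runs as the DB2 instance owner; the interval is an arbitrary choice). db2start conveniently reports SQL1026N when the database manager is already active, so one call serves as both probe and restart. As noted above, this only papers over the crash; the real fix comes out of db2diag.log.

import subprocess
import time

def ensure_db2_running():
    # db2start returns message SQL1026N when the instance is already
    # active, so it doubles as a cheap "is it up?" probe plus restart.
    # Must run as the DB2 instance owner.
    result = subprocess.run(["db2start"], capture_output=True, text=True)
    output = result.stdout + result.stderr
    if "SQL1026N" in output:
        print("instance already active")
    else:
        print("instance (re)started:", output.strip())

if __name__ == "__main__":
    while True:          # run as a service, or call once from cron
        ensure_db2_running()
        time.sleep(60)   # check every minute (arbitrary interval)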
