Oracle ORDS: GET request returns old data, then after a period of time the changed data

I am having a problem with Oracle REST Data Services (ORDS for short) and I can't find a solution.
The problem is as follows:
We are using ORDS on a Tomcat web server and I have two endpoints defined: one to update a dataset and one to get all datasets from this table.
If I update the value via my endpoint, the change is written to the table, but when I then try to get the table, ORDS only responds with the old, unchanged data. After a certain period of time of constantly retrying the GET, it responds with the expected values (after at most one minute, sometimes earlier).
Because of this behaviour I suspected some type of caching, but I can't find any such configuration in the Oracle database or on Tomcat.
Another point for this theory: I logged what happens in my GET procedure and found that only the one request with the correct values gets logged, as if the others never even happened.
The requests giving me the old values come back in the 4-8 ms range, while the request with the correct data takes 100-200 ms.
Thanks for your help :)
I tried logging what happens, but found that only the request with the fresh values was logged.
I tried restarting the Tomcat web server to make sure any cache was cleared, but this didn't fix the problem.
I searched for a configuration in ORDS or Oracle where a cache would be defined, but none was set.
I tried setting the value via a SQL UPDATE instead of the endpoint, but even then the change only shows up delayed.

Do you have a full overview of the communication path? Maybe there is a proxy in between?
If Tomcat has no caching configuration and you restarted the web server during your tests and still see the same issue, then there is maybe something more in between...
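As a first check, it may help to compare the HTTP response headers of a fast (stale) response and a slow (fresh) one, for example with curl; cache-related headers such as Cache-Control, Age, Via or X-Cache would point to an intermediate cache. A minimal sketch, with a placeholder URL:
curl -sS -i https://yourhost/ords/yourschema/yourmodule/yourendpoint
If the fast responses carry an Age or X-Cache header that the slow one does not, something between the client and ORDS is serving cached responses.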
Kind regards
M-Achilles

Related

Delete requests succeeded, however records still exist in AWS ElasticSearch/OpenSearch

We have AWS ElasticSearch/OpenSearch, from which we deleted some records as part of a migration. According to the logs and spot checks of a few records, we successfully sent delete requests to ElasticSearch/OpenSearch by Id, index name and index type using the JestClient execute API; however, the records still exist, which is creating an operational issue in the system. The logs show that the delete requests did not raise any exception and the responses indicated success (JestResult isSucceeded is true). Currently we have operational issues due to these non-deleted records in ES, so we delete them manually whenever they are surfaced by our users, which is not an efficient approach and is causing operational pain.
Could someone provide input on what scenarios can cause this kind of issue in ElasticSearch/OpenSearch? How can this be debugged and avoided, if anyone has faced this before?

Error 502 with Laravel when exporting to Excel on Azure Web App Linux

I have a Laravel app running on the Azure Web App Linux service, all running nicely and smoothly until I reach a feature that exports a query to an XLS file for download. Then I receive a 502 error.
On my local environment it works normally; I can export the query to XLS with no issues. It is not a large query, just a few rows.
In the same app, I have a function that exports just one row at a time to XLS and it works fine, so the problem only appears with a larger(ish) query.
Any ideas? I have tried scaling up, restarting the app and Apache, and changing the .ini (via .htaccess) to increase the execution time.
There is no trace in the logs either; there is something about the container crashing, but I cannot tie it to this particular error.
OK, I managed to solve it... it was not straightforward at all. It has to do with the size of the query: even though it is not big by any means (a couple of thousand rows max), raising the memory limit to 1024M or further still ended in a 502 error. I decided to try something different and moved from Laravel Excel to Fast-Excel, which is less featured but... it works. Now everything downloads perfectly. In case you are having this issue, give fast-excel a try.
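For reference, a minimal sketch of what such an export could look like with the rap2hpoutre/fast-excel package (the Order model and the query are hypothetical stand-ins for the real one):
namespace App\Http\Controllers;

use App\Models\Order; // hypothetical model standing in for the exported query
use Rap2hpoutre\FastExcel\FastExcel;

class OrderExportController extends Controller
{
    // Stream the query result straight to the browser as an .xlsx download
    public function export()
    {
        $rows = Order::where('status', 'paid')->get(); // a few thousand rows at most

        return (new FastExcel($rows))->download('orders.xlsx');
    }
}
Fast-Excel writes the spreadsheet in a streaming fashion, which is presumably why the memory pressure (and the 502) disappears compared to a full-featured exporter.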

Where to set Hibernate FlushMode?

I have an IntelliJ project using Spring MVC, with the Hibernate FlushMode set to AUTO by default.
The problem is: when I try to delete an object from the DB using the web interface, it works fine, but after the third time I do that, the interface becomes unresponsive (even though Hibernate receives the command to delete the object with that ID) and I have to reboot my WildFly server.
Any idea where I can change that? Is there a way to set it in a configuration file, or do I have to invoke a method to set it? And is the FlushMode even the problem in the first place?
Regards
UPDATE: After testing several things, I think I finally found the root of the problem. If I access the DB more than three times consecutively, the server becomes unresponsive. How can I overcome this?
UPDATE #2: I found that the problem was that in the DAO the session was opened but never closed (there was no "session.close()" line, just to be clear). That was the reason why, after three hits on the DB (whether to add items, delete them or just fetch info), the server became unresponsive, presumably because the connection pool was exhausted. Now everything works perfectly!
I actually didn't get your question properly.
Does the web page get stuck the third time you try to delete something from your frontend? Or is the request stuck in Hibernate the third time you make a delete request? Could you be a little clearer about what is actually happening / what issue you are facing?
To answer your question - how to set the FlushMode in Hibernate:
In case you are using an EntityManager: entityManager.setFlushMode(FlushModeType.COMMIT) - JPA supports AUTO and COMMIT
In case you are using the native Hibernate Session API:
session.setHibernateFlushMode(FlushMode.COMMIT) - from Hibernate 5.2 onwards
OR
session.setFlushMode(FlushMode.COMMIT) - before Hibernate 5.2
(NOTE: please check the docs for the exact Hibernate version from which setFlushMode is deprecated.)
Hibernate supports 4 flush modes: AUTO / COMMIT / ALWAYS / MANUAL

Is it possible to make a runtime DB connection and use it in Schema, DB and models without affecting configs?

I want to use dynamic databases at runtime without affecting config/database.php, because of concurrent users.
I have a main DB with a table that contains references to several other DBs. At runtime I not only need to connect to those DBs but may also want to run migrations on them.
I am aware that this is possible by having a second connection entry in config.database.connections, but I have a feeling that if two users hit the server at the same time, changes to the physical config file may create a conflict.
I also read (and experimented with) that you can edit the second connection at runtime using the code below:
\Config::set('database.connections.mysql2.database', 'somedynamicdb');
DB::purge('mysql2');
But I fear that if it persists the change across different users, it may conflict for concurrent users; and if it does not persist the change, it won't work for migrations.
I want to understand/know two things specifically:
What is the scope of the code above (i.e. the Config::set() call)? Does it persist across different user calls to the server?
If I call migrations using Artisan::call('migrate') with a --database=connectionname option right after I change the DB name in connectionname, will it use the dynamically set database or the physical config value?
UPDATE
It is also worth noting that a call to Artisan::call('migrate') with --database=connectionname will make the new connection persist for the rest of your app call.
See here for details:
https://github.com/laravel/framework/issues/28253
Config::set will only apply for the request for which it was set, won't apply to any other requests, and will not persist beyond the request. If you're not processing a request (e.g. a CLI command) then it won't affect anything beyond the current PHP process.
As for Item #2, if you're invoking from the command line, you can just do DB_CONNECTION=connectionname php artisan migrate. If you need to invoke the artisan command from code, using Config::set is still the right way to go.
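A minimal sketch of that approach, assuming the second connection is named mysql2 in config/database.php and somedynamicdb is the database chosen at runtime (both names taken from the question):
use Illuminate\Support\Facades\Artisan;
use Illuminate\Support\Facades\Config;
use Illuminate\Support\Facades\DB;

// Point the existing mysql2 connection at the dynamic database for this request only
Config::set('database.connections.mysql2.database', 'somedynamicdb');
DB::purge('mysql2'); // drop any cached connection so the new database name is picked up

// Run the migrations against the dynamically configured connection
Artisan::call('migrate', [
    '--database' => 'mysql2',
    '--force'    => true, // run without the confirmation prompt outside the console
]);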
We use connections created on the fly here all the time and it works very well. We set this up in a middleware that runs after authentication, so it is only valid for the current user's request, based on their login information.
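Roughly, such a middleware could look like the sketch below (assuming a tenant_db column on the authenticated user and a pre-declared tenant connection entry; both names are illustrative):
namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Config;
use Illuminate\Support\Facades\DB;

class SetTenantConnection
{
    public function handle(Request $request, Closure $next)
    {
        // The authenticated user record carries the name of its own database (illustrative column)
        $database = $request->user()->tenant_db;

        // Re-point the pre-declared 'tenant' connection for this request only
        Config::set('database.connections.tenant.database', $database);
        DB::purge('tenant');

        return $next($request);
    }
}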

AWS RDS database can't read record that was just written to database

I'm seeing an error with some Laravel code that uses an AWS RDS database. The code writes a record to the database and then immediately does a search to load that record using the primary key and gets no results.
If I try it manually afterwards I find the record. If I insert a 1-second sleep in the code it works correctly.
I've tried this using Laravel's separate settings for read and write hosts. I've also tried setting them to the same host and only using one host. The result is always the same. However, other environments with the same configuration do not have the error.
Is there an option in RDS that needs to be changed to have the record available immediately after it's written?
The error is due to MySQL master-slave replication lag.
A common mistake is to use a MySQL cluster and then perform a read immediately after a write.
Since the read occurs on one of the slave/read hosts and the write occurs on the master, the data may not have been replicated yet at the time of the read.
There are a couple of ways to rectify the error:
The read immediately after the write must be performed on the master (not the slave). Even though you've mentioned that you changed it to a single host, people often make a mistake while switching the connection. Refer to this SO post on how to properly switch connections in Laravel.
An easier way may be to use the sticky database option in Laravel (a config sketch follows at the end of this answer). Beware: this may cause performance issues if not used carefully and only for the use case you need. From the docs:
The sticky option is an optional value that can be used to allow the immediate reading of records that have been written to the database during the current request cycle.
If the sticky option is enabled and a "write" operation has been performed against the database during the current request cycle, any further "read" operations will use the "write" connection.
The most "non-obvious" way is to NOT perform a read immediately after a write. Think about whether this can be avoided depending on your use case.
Other methods: refer to this SO post.
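As a rough illustration, the sticky option sits next to the read/write hosts in config/database.php along these lines (the host names are placeholders; check the Laravel docs for your version):
// config/database.php (excerpt)
'mysql' => [
    'driver' => 'mysql',
    'read' => [
        'host' => ['replica.example.internal'],  // RDS read replica (placeholder)
    ],
    'write' => [
        'host' => ['primary.example.internal'],  // RDS writer endpoint (placeholder)
    ],
    'sticky' => true, // reads after a write in the same request reuse the write connection
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
],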
