Payara Docker with Oracle DB connection

I have a Java application running in a Payara Docker container that connects to an Oracle database through a connection pool. When a particular button in the UI is clicked, it calls a database stored procedure to display some rows as the result. When this button is triggered multiple times in a single user session, it returns duplicate results: the first time it returns 2 rows (expected), the second time 4 (the 2 expected rows plus 2 apparently from the previous run), then 6, and so on. I checked the stored procedure, and it runs fine without duplication when executed directly in the database. Can you please help with this?

This looks like a flaw in the UI or data layer. You can do the following:
Perform a client SQL*Net trace and check how many times the rows get shipped.
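If the trace shows only the expected rows coming back from the database, the usual suspect is state in the data layer that survives across requests. A minimal sketch of that kind of bug, assuming plain JDBC and hypothetical names (ReportService, report_proc): a result list kept as a field on a shared bean keeps accumulating rows on every click.

import java.sql.*;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

// Hypothetical data-layer class; the class, method and procedure names are illustrative only.
public class ReportService {

    private final DataSource dataSource;

    // BUG: results kept as instance state on a shared/singleton bean, so every
    // click appends to the same list and the UI sees 2, 4, 6 ... rows.
    private final List<String> results = new ArrayList<>();

    public ReportService(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public List<String> loadRows() throws SQLException {
        try (Connection con = dataSource.getConnection();
             CallableStatement cs = con.prepareCall("{call report_proc(?)}")) {
            cs.registerOutParameter(1, Types.REF_CURSOR);
            cs.execute();
            try (ResultSet rs = (ResultSet) cs.getObject(1)) {
                while (rs.next()) {
                    results.add(rs.getString(1));   // accumulates across calls
                }
            }
        }
        return results;  // fix: build and return a new local list on every call instead
    }
}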

Related

Cannot load large amounts of data into PostgreSQL with DataGrip or IntelliJ

I use DataGrip to move some data from a MySQL installation to another PostgreSQL database.
That worked like a charm for 3 other tables. The next one, over 500,000 rows, could not be imported.
I use the function "Copy Table To... (F5)".
This is the log.
16:28 Connected
16:30 user#localhost: tmp_post imported to forum_post: 1999 rows (1m 58s 206ms)
16:30 Can't save current transaction state. Check connection and database settings and try again.
For other errors, like wrong data types or null data in NOT NULL columns, a very helpful log is created, but not here.
The problem is also relevant when using the database plugin for IntelliJ-based IDEs, not only DataGrip.
The simplest way to solve the issue is just to add "prepareThreshold=0" to your connection string as in this answer:
jdbc:postgresql://ip:port/db_name?prepareThreshold=0
Or, for example, if you are using several settings in the connection string:
jdbc:postgresql://hostmaster.com:6432,hostsecond.com:6432/dbName?targetServerType=master&prepareThreshold=0
It's a well-known problem when connecting to the PostgreSQL server via PgBouncer rather than a problem with IntelliJ itself. When loading massive data into the database, IntelliJ splits the data into chunks and loads them sequentially, each time executing the query and committing the data. By default, the PostgreSQL JDBC driver starts using server-side prepared statements after 5 executions of a query.
The driver uses server-side prepared statements by default when the PreparedStatement API is used. In order to get to server-side prepare, you need to execute the query 5 times (that can be configured via the prepareThreshold connection property). An internal counter keeps track of how many times the statement has been executed and when it reaches the threshold it will start to use server-side prepared statements.
Probably your PgBouncer runs with transaction pooling, and the latest version of PgBouncer doesn't support prepared statements with transaction pooling.
How to use prepared statements with transaction pooling?
To make prepared statements work in this mode, PgBouncer would need to keep track of them internally, which it does not do. So the only way to keep using PgBouncer in this mode is to disable prepared statements in the client.
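If you hit the same error from your own Java code rather than from the IDE, the same prepareThreshold property can be set programmatically on the pgjdbc driver. A small sketch, with the host, port, database and credentials as placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class PgBouncerFriendlyConnection {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "app_user");       // placeholder credentials
        props.setProperty("password", "secret");
        // 0 disables server-side prepared statements, so PgBouncer's
        // transaction pooling never sees an unknown statement name.
        props.setProperty("prepareThreshold", "0");

        String url = "jdbc:postgresql://ip:port/db_name"; // placeholder host/port/db
        try (Connection con = DriverManager.getConnection(url, props)) {
            System.out.println("connected, autocommit=" + con.getAutoCommit());
        }
    }
}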
You can verify that the issue is indeed caused by the incorrect use of prepared statements with PgBouncer by viewing the IntelliJ log files. To do that, go to Help -> Show Log in Explorer and search for the "org.postgresql.util.PSQLException: ERROR: prepared statement" exception.
2022-04-08 12:32:56,484 [693272684] WARN - j.database.dbimport.ImportHead - ERROR: prepared statement "S_3649" does not exist
java.sql.SQLException: ERROR: prepared statement "S_3649" does not exist
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2440)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2183)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:308)
at org.postgresql.jdbc.PgConnection.executeTransactionCommand(PgConnection.java:755)
at org.postgresql.jdbc.PgConnection.commit(PgConnection.java:777)

Azure Redis Cache multiple databases

Whenever I insert a key in any one DB of the Redis cache, it gets inserted in all 16 DBs, and removing the key from any one DB deletes it from all DBs. Attached is a screenshot showing this. As far as I know, the DBs are independent of each other, and a key should only be inserted into or removed from the DB currently in use. Could anyone please explain the observed behaviour?
The Azure Redis Portal Console currently doesn't handle the SELECT statement correctly (because each command goes out on a new connection), so you are actually doing a GET on DB 0. This is a known issue, and we are in the process of creating V2 of the portal console, which will fix it. The rough ETA is sometime in the next couple of months.
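The underlying point is that SELECT only applies to the connection it was issued on, so any client that reuses a single connection behaves as expected. A small sketch using the Jedis client, where the cache host name, access key and key name are placeholders and the boolean constructor argument assumes Azure's TLS endpoint on port 6380:

import redis.clients.jedis.Jedis;

public class RedisDbDemo {
    public static void main(String[] args) {
        // Placeholder Azure Cache for Redis endpoint; 6380 is the TLS port.
        try (Jedis jedis = new Jedis("mycache.redis.cache.windows.net", 6380, true)) {
            jedis.auth("access-key");   // placeholder access key

            jedis.select(3);            // stays in effect for this connection only
            jedis.set("demo:key", "only in db 3");

            jedis.select(0);
            System.out.println(jedis.get("demo:key")); // null: the key is not in db 0
        }
    }
}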

Execute Long Stored Procedure via Web page

I have a web application (using Struts 1.x) running on WebLogic 10.3.3 that reads files and inserts the data into a database (Oracle 11g). Through this web application I can also run a stored procedure in the database. The problem is that, after a few changes to the stored procedure, it now takes a pretty long time to finish executing (25-40 minutes).
The page that runs the stored procedure keeps loading, and even after the stored procedure has finished (checked through a session browser), the page is still in a loading state and after some time it displays a timeout error.
Is there any way for a web page to run a stored procedure that takes a long time to finish (60 minutes)? Should I make changes to the application code or to the WebLogic settings?
Thanks for the responses.
You should never ever ever ever leave the user waiting. When I last dealt with that, the flow was (a rough code sketch follows this list):
Client (user/browser) tells server "Start procedure".
Server spawns a thread which starts the procedure.
Server tells Client(user/browser) "I is done now. come back later. Mabee I emails yoo"
Server completes procedure.
Server emails user that procedure is done.
Secondary thread terminates.
Server eats Sir Robin's minstrels and there is much rejoicing.
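A minimal sketch of that flow in plain Java, where the data source, procedure name and mail step are placeholders; a real Struts/WebLogic app would more likely hand the work to a WebLogic work manager or a JMS queue than to a bare executor:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.sql.DataSource;

// Hypothetical helper: the servlet/action calls start() and returns immediately,
// so the browser never waits on the 25-40 minute procedure.
public class LongProcedureRunner {

    private final DataSource dataSource;
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public LongProcedureRunner(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void start(final String userEmail) {
        executor.submit(new Runnable() {
            public void run() {
                try (Connection con = dataSource.getConnection();
                     CallableStatement cs = con.prepareCall("{call long_running_proc}")) {
                    cs.execute();                      // the 25-40 minute part
                    notifyUser(userEmail, "Procedure finished");
                } catch (Exception e) {
                    notifyUser(userEmail, "Procedure failed: " + e.getMessage());
                }
            }
        });
    }

    private void notifyUser(String email, String message) {
        // Placeholder: send a mail (e.g. via JavaMail) or flag a status table
        // that the UI polls on the user's next visit.
        System.out.println("notify " + email + ": " + message);
    }
}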

What should the approach be?

I'll try to be clear; I'm short on ideas for this problem, even though it sounds like a classic one.
My application is running on the WebLogic 10.3.3 application server, and for the database I am using Oracle Database 11g. My problem is that there is a table in the DB, let's say "user", with a column, let's say "columnA". This table is updated by some module of the application.
What I want is: when the value of the column is "abc", I have to show an alert on a console (IP). {The IP can be retrieved from the DB as it is configured there; this IP will be a Linux system other than the Linux machine where the Oracle database is installed.} Updates to my table are made continuously by the application module. Please tell me where I should start and what I should read; I cannot work out what the approach should be. Any help is much appreciated.
A trigger on the table can call UTL_HTTP to communicate with another machine (e.g. call a RESTful API; a sketch of a possible receiving endpoint follows the list below).
The architectural questions are:
This will happen PRIOR to the commit, so you may get false alerts if a change is rolled back.
If you wait for a response, it will slow the system down.
What do you do if you get a non-standard response (e.g. the other server isn't available)?
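For completeness, the "other machine" only needs something that can accept the trigger's UTL_HTTP call and print the alert to its console. A throwaway sketch of such a receiver using the JDK's built-in HTTP server; the port and path are arbitrary placeholders:

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal alert receiver: the database trigger would POST to http://<this-ip>:8085/alert
public class AlertReceiver {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8085), 0);
        server.createContext("/alert", new HttpHandler() {
            public void handle(HttpExchange exchange) throws IOException {
                try (InputStream in = exchange.getRequestBody()) {
                    String body = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                    System.out.println("ALERT from database: " + body); // "show alert to console"
                }
                exchange.sendResponseHeaders(204, -1); // reply quickly so the trigger isn't delayed
                exchange.close();
            }
        });
        server.start();
    }
}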

Oracle insert with many bind variables over WAN is very slow

We have a problem with a slow INSERT statement that uses 40 bind variables as column values. It runs for several seconds over a WAN link, and we were not able to nail down the problem until we used a network analyzer: every single execution of this prepared query required exchanging over 120 packets between client and server to complete. What can we do to execute it more efficiently?
When I run the same insert with literal values (without bind variables) from the same host, it completes in tens of milliseconds. There is nothing special about the parameters; they are only short varchars and numbers.
We are using Delphi 6 with ODAC. We tried various versions of ODAC and the Oracle client to no avail. On the server side we tried both Oracle 10 and 11.
TNS is not designed to work well over a WAN.
If possible, rewrite your application to use another network layer, such as HTTP, which is more efficient.
You can do it using Oracle HTTP Server, for instance.
Have you looked at external tables? They replace the need for SQL*Loader, though they require Oracle 9i or above.
