Does Apache Derby have a graceful shutdown?

I have a program that starts up the Derby Network Server using the NetworkServerControl API and when I want to shut down the network server, I want to be done gracefully so that no new transactions will start, but all current transactions are given a set amount of time to finish. I see that the API has a shutdown command, but it does not say anything about current ongoing transactions from client connections and whether or not it just kills the process immediately. Does the Derby Network Server handle current and new transactions automatically, or is there a method to stop new client connections and transactions?
I was thinking (and this might be completely wrong) that I could call setMaxThreads(0) to stop JDBC client connections, but I am not sure what will happen to ongoing transactions if I do.
Thanks in advance.

Consider using table locks to do this. Write a special program which takes table locks on all your important tables, then shuts down the network server.
http://db.apache.org/derby/docs/10.8/ref/rrefsqlj40506.html
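Whatever you use to keep new work out (table locks, or lowering setMaxThreads), the drain-with-timeout step itself can be sketched with plain java.util.concurrent primitives. This is an illustrative pattern, not a Derby API; the executor here stands in for whatever tracks your in-flight transactions:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class GracefulDrain {
    // Stop accepting new work, give in-flight work a grace period, then force.
    static boolean drain(ExecutorService inFlight, long graceSeconds)
            throws InterruptedException {
        inFlight.shutdown();                       // no new tasks accepted
        if (inFlight.awaitTermination(graceSeconds, TimeUnit.SECONDS)) {
            return true;                           // everything finished in time
        }
        inFlight.shutdownNow();                    // grace period expired
        return false;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> { });                    // stands in for a short transaction
        System.out.println(drain(pool, 5) ? "drained cleanly" : "forced shutdown");
    }
}
```

The point is the ordering: first refuse new work, then wait a bounded time, and only then kill what is left.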

DriverManager.getConnection("jdbc:derby:MyDbTest;shutdown=true");
This will shut down the MyDbTest database gracefully. Note that Derby signals a successful shutdown by throwing an SQLException (SQLState 08006 for a single database, or XJ015 when shutting down the whole engine with jdbc:derby:;shutdown=true), so you should catch it and check the SQLState rather than treat it as a failure.

Related

Polling database after every 'n' seconds vs CQN Continuous Query Notification - Oracle

My application currently polls the database every n seconds to see if there are any new records.
To reduce the network round trips and CPU cycles of this polling, I was thinking of replacing it with a CQN-based approach, where the database will itself notify the subscribed application whenever there is a commit.
The only problem is: what if Oracle was NOT able to notify the application due to a connection issue between Oracle and the subscribed application, or if the application crashed or was killed for any reason? ... Is there a way to know if the application has missed any CQN notification?
Is polling the database via application code itself the only way for mission-critical applications?
You didn't say whether every 'n' seconds means you're expecting data every few seconds, or you just need your "staleness" to be as low as that. That has an impact on the choice of CQN because, as per the docs, https://docs.oracle.com/en/database/oracle/oracle-database/12.2/adfns/cqn.html#GUID-98FB4276-0827-4A50-9506-E5C1CA0B7778
"Good candidates for CQN are applications that cache the result sets of queries on infrequently changed objects in the middle tier, to avoid network round trips to the database. These applications can use CQN to register the queries to be cached. When such an application receives a notification, it can refresh its cache by rerunning the registered queries"
However, you have control over how persistent you want the notifications to be:
"Reliable Option:
By default, a CQN registration is stored in shared memory. To store it in a persistent database queue instead—that is, to generate reliable notifications—specify QOS_RELIABLE in the QOSFLAGS attribute of the CQ_NOTIFICATION$_REG_INFO object.
The advantage of reliable notifications is that if the database fails after generating them, it can still deliver them after it restarts. In an Oracle RAC environment, a surviving database instance can deliver them.
The disadvantage of reliable notifications is that they have higher CPU and I/O costs than default notifications do."
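Since even reliable notifications can be lost if the subscriber is down long enough, mission-critical consumers often pair CQN with a low-frequency reconciliation poll over a monotonically increasing change id. A minimal sketch of that safety net follows; the change-log ids are hypothetical, and the supplier stands in for something like a MAX(change_id) query against your own change-log table:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class ReconciliationPoll {
    // Highest change id already processed (via CQN or a previous poll).
    private long lastSeenId = 0;

    // fetchMaxId stands in for a query like "SELECT MAX(change_id) FROM change_log".
    public List<Long> poll(Supplier<Long> fetchMaxId) {
        long maxId = fetchMaxId.get();
        List<Long> missed = new ArrayList<>();
        for (long id = lastSeenId + 1; id <= maxId; id++) {
            missed.add(id);            // ids a CQN callback never delivered
        }
        lastSeenId = maxId;
        return missed;
    }

    public static void main(String[] args) {
        ReconciliationPoll p = new ReconciliationPoll();
        System.out.println(p.poll(() -> 3L));   // prints [1, 2, 3]
        System.out.println(p.poll(() -> 3L));   // nothing new: prints []
    }
}
```

Because the poll only reconciles gaps, it can run far less often than the original n-second polling while still bounding how long a missed notification goes unnoticed.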

How resilient is reporting to Trains server?

How would Trains go about sending any missing data to the server in the following scenarios?
Internet connection breaks temporarily while running an experiment
Internet connection breaks and doesn't come back before the experiment ends (any manual way to send all the data that was missed?)
The machine running Trains server resets in the middle of an experiment
Disclaimer: I'm part of the allegro.ai Trains team
Trains will automatically retry sending logs, basically forever. The logs/metrics are sent in a background thread, so this should not interfere with execution. You can control the retry frequency by adjusting the sdk.network.iteration.retry_backoff_factor_sec parameter in your ~/trains.conf file, see example here
The experiment will try to flush all metrics to the backend when the experiment ends, i.e. the process will wait at_exit until all metrics are sent. This means that if the connection was dropped, it will retry until it is up again. If the experiment was aborted manually, there is no way to capture/resend those lost metric reports. That said, with the new 0.16 version, offline mode was introduced: one can run the entire experiment offline, then later report all logs/metrics/artifacts.
The Trains-Server machine is fully stateless (the state itself is stored in the databases on the machine), which means that from the experiment's perspective the connection was dropped for a few minutes and then it's available again. To your question: if the Trains-Server restarted, it is transparent to all experiments; they continue as usual and no reports will be lost.
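As a rough illustration of what a backoff factor controls, here is a generic capped exponential backoff. This is a sketch of the concept only, not the exact formula Trains uses internally for retry_backoff_factor_sec:

```java
public class Backoff {
    // Delay before retry attempt n: factor * 2^n, capped so retries
    // never wait longer than capSeconds.
    static double delaySeconds(double backoffFactor, int attempt, double capSeconds) {
        double delay = backoffFactor * Math.pow(2, attempt);
        return Math.min(delay, capSeconds);
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 5; attempt++) {
            System.out.println(delaySeconds(0.5, attempt, 8.0));
        }
        // prints 0.5, 1.0, 2.0, 4.0, 8.0 (one per line)
    }
}
```

Raising the factor spaces retries further apart (less load on a struggling server); lowering it makes recovery faster once the connection is back.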

Should I explicitly close RethinkDB connections?

I'm a little hazy on how connections in RethinkDB work. I'm opening a new connection every time I execute queries without closing them once the queries finish.
Is this a good practice? Or should I be explicitly closing connections once queries are finished?
(I'm using the JS driver. I don't believe the documentation speaks to this)
[edited cuz the previous post title was vague]
You should explicitly close connections, otherwise you will exhaust the database server's resources. I'm assuming you are running node.js, which will keep connections open until you kill the application.
Preferably you would use a pool, to lessen the overhead of connecting. For a pre-made solution, look into rethinkdbdash, which has basically the same API as the official driver, but with built-in pooling.
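In the JS driver rethinkdbdash does the pooling for you, but the core idea is small enough to sketch generically. Plain Java below, with strings standing in for connections; none of this is RethinkDB API:

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class TinyPool<T> {
    private final BlockingQueue<T> idle = new LinkedBlockingQueue<>();

    public TinyPool(List<T> connections) { idle.addAll(connections); }

    // borrow() blocks until a connection is free, so pool size caps concurrency.
    public T borrow() throws InterruptedException { return idle.take(); }

    // Returning the connection instead of closing it is what saves the
    // per-query connect/handshake overhead.
    public void release(T conn) { idle.add(conn); }

    public int idleCount() { return idle.size(); }

    public static void main(String[] args) throws Exception {
        TinyPool<String> pool = new TinyPool<>(List.of("conn-1", "conn-2"));
        String c = pool.borrow();
        System.out.println("borrowed " + c);           // prints: borrowed conn-1
        pool.release(c);
        System.out.println("idle: " + pool.idleCount()); // prints: idle: 2
    }
}
```

A real pool also validates connections before handing them out and reconnects dead ones, but the borrow/release cycle is the part that answers "should I close after every query": with a pool, release instead of close.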

Ruby Sequel + PG: Is (DB.disconnect + signal/trap/exit) necessary in Sequel-backed apps?

Coming from other environments (e.g. nodejs), it was necessary to close the db connections after the server finishes and closes. I've searched Sequel's source + online Sequel examples. I've seen .disconnect mentioned mostly with just forks and threads.
Is it necessary to manually call DB.disconnect in a signal trap at app exit? Or are the connections closed automatically?
I'm only running a simple Rack app, w/o app preloading in Unicorn, only Postgresql connections.
Ruby will automatically close the database connection sockets on process shutdown, so you don't need to call DB.disconnect manually (though you can if you want to).
When your PHP script is finished, PHP will automatically perform garbage collection on the objects and resources you used previously. As a result of this, plus the fact that most scripts finish in less than a tenth of a second, it is generally not necessary to explicitly disconnect from your MySQL server or to manually free the space allocated to your SQL results.
If you do wish to close it explicitly:
$db->close();
This closes the current connection to the database (when called on your database connection object).
Only close the connection entirely if you are done with your SQL connection for that script.

Inactive session in Oracle by JDBC

We have a web service written in Java that connects to an Oracle database for data extraction. Recently, we encountered too many inactive sessions in the Oracle database from JDBC, i.e. from our web service.
We are very sure that all the connections are being closed and set to null after every process.
Can anyone help us with this? Why is it causing inactive sessions in the database, and what can be the solution?
Thank you.
What, exactly, is the problem?
Normally, the middle tier application server creates a connection pool. When your code requests a connection, it gets an already open connection from the pool rather than going through the overhead of spawning a new connection to the database. When your code closes a connection, the connection is returned to the pool rather than going through the overhead of physically closing the connection. That means that there will be a reasonable number of connections to the database where the STATUS in V$SESSION is "INACTIVE" at any given point in time. That's perfectly normal.
Even under load, most database connections from a middle tier are "INACTIVE" most of the time. A status of "INACTIVE" merely means that at the instant you ran the query, the session was not executing a SQL statement. Most connections will spend most of their time either sitting in the connection pool waiting for a Java session to open them or waiting on the Java session to do something with the data or waiting on the network to transfer data between the machines.
Are you actually getting an error (i.e. ORA-00020: maximum number of processes exceeded)? Or are you just confused by the number of entries in V$SESSION?
