WebSphere solution for writing to file system

I'm using WebSphere Application Server and I have a database to which I write my MI data. The requirement now is to write that data somewhere else whenever the database is down or the tablespace exceeds its limit.
I thought of writing it to a file when the database is down, but I read somewhere that this is not advisable on WebSphere because it causes problems in a clustered environment.
Can anyone suggest a better way to handle this requirement?

The solution is simple: use shared file storage.
It must be available to every cluster member.
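For illustration, here is a minimal Java sketch of that approach; the mount point, the file-naming scheme, and the writeToDatabase stub are assumptions, not WebSphere APIs. When the JDBC write fails, the record is appended to a per-member file on the shared mount, so each cluster member only ever writes its own file.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.sql.SQLException;

public class MiDataWriter {

    // Hypothetical mount point; it must resolve to the same shared storage
    // (NFS, SAN, ...) on every cluster member.
    private static final Path FALLBACK_DIR = Paths.get("/mnt/shared/mi-fallback");

    public void write(String miRecord) {
        try {
            writeToDatabase(miRecord);              // normal path
        } catch (SQLException e) {
            writeToSharedFile(miRecord);            // database down or tablespace full
        }
    }

    private void writeToSharedFile(String miRecord) {
        try {
            Files.createDirectories(FALLBACK_DIR);
            // One file per cluster member avoids concurrent writers on the same file;
            // "cluster.member.name" is a placeholder for however you identify the member.
            String member = System.getProperty("cluster.member.name", "member1");
            Path file = FALLBACK_DIR.resolve("mi-" + member + ".log");
            Files.write(file, (miRecord + System.lineSeparator()).getBytes(StandardCharsets.UTF_8),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException io) {
            throw new IllegalStateException("Fallback write to shared storage failed", io);
        }
    }

    private void writeToDatabase(String miRecord) throws SQLException {
        // Existing JDBC/JPA write goes here.
    }
}

The records collected in the fallback files can then be replayed into the database once it is available again.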

Related

Oracle application - migration to Exadata server

We have an upcoming migration of our Oracle database to an Exadata server. I want to clarify some issues I have thought of:
Will there be any issues with the code, such as performance issues? Exadata has a different type of optimizer; it doesn't use indexes and has a columnar optimizer, if I'm not mistaken.
Currently some import/export files are generated on the database server (accessed via FileZilla). I understand that on Exadata the database server is not accessible, and I suspect that either:
• we will have to move those files to another server; Oracle only knows FTP (whose ports are closed at our client), so how do we write to / read from another server? (As far as I understand, they would like to put all the files on the WAS server.)
• or we will need to import the files into tables using the Java application and process them from there (and the same for the exported files).
Can files that arrive automatically from other applications be written to the database server, or do we have the same problems as with the manual part?
We have plenty of database jobs that run KSH scripts on the database server; is there a problem with them? I understand they should also be moved to the WAS server, but I do not know how Oracle will call them from there.
Will there be any problems with Jenkins deployments? Does anything change? We save the SQL/PL-SQL sources in XML files, from which the whole application is restored (packages, configuration tables, nomenclatures, ...), with the exception of the working data; the XML files are read through a procedure from an Oracle directory.
If you can think of any other issues concerning this migration, any problems you have encountered during or after the migration to Exadata, please share!
Thank you,
Step by step:
On Exadata you are going to have the same optimizer behaviour, with some improvements, because Exadata may improve full table scan performance thanks to smart scans. Indeed, Exadata is able to avoid retrieving data blocks during a full table scan because it knows in advance that they do not contain the needed data.
On Exadata you can expose DBFS file systems to external servers, which might be useful for external tables, imports/exports, and so on.
You can write your files to a DBFS file system that you configure.
You could also use DBFS if you want the KSH files to be accessible from outside your Exadata.
Point your Oracle directory at a directory in the DBFS file system where you put your XML files, and you are done.
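As a rough sketch (the connection details, directory name, and DBFS path below are placeholders, not values from the question), repointing the directory object at the DBFS mount can be done from any JDBC client with the required privileges:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class RepointOracleDirectory {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details; the user needs the CREATE ANY DIRECTORY privilege.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//exadata-host:1521/SERVICE", "app_user", "secret");
             Statement stmt = conn.createStatement()) {
            // Point the existing directory object at a folder on the DBFS mount;
            // the procedure that reads the XML files keeps working unchanged.
            stmt.execute("CREATE OR REPLACE DIRECTORY xml_dir AS '/dbfs/staging/xml'");
        }
    }
}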

Is there a way to force read from disk in UcanAccess?

I'm building a Java app that needs to coexist with a Visual Basic one. Both consume a single Access database (.mdb).
As the VB app sometimes writes to the database, I need to programmatically re-read the .mdb from disk. Is there a way to do this? Is this Jackcess's responsibility or UCanAccess's?
I'm using UCanAccess with Spring and MyBatis.
UCanAccess always reloads the data when another process updates the database file.
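So there is nothing special to call; a plain query sees the latest state of the file. A minimal sketch (the path and table name are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ReadSharedMdb {
    public static void main(String[] args) throws SQLException {
        // Placeholder path to the shared Access file.
        String url = "jdbc:ucanaccess://C:/shared/data.mdb";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM Orders")) {
            if (rs.next()) {
                // If the VB application has modified data.mdb since the last query,
                // UCanAccess detects the changed file and reloads it before answering.
                System.out.println("Rows visible to the Java side: " + rs.getInt(1));
            }
        }
    }
}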

H2 Database multiple connections

I have the following issue:
Two instances of an application on two different systems should share a small database.
The main problem is that both systems can only exchange data through a network folder.
I don't have the possibility to set up a database server somewhere.
Is it possible to place an H2 database on the network folder and let both instances connect to the database (also concurrently)?
I could connect both instances to the database using embedded mode if I disable file locking, right?
The instances only perform READ or INSERT operations on the database. Do I risk data corruption using multiple concurrent embedded connections?
As the documentation says (http://h2database.com/html/features.html#auto_mixed_mode):
Multiple processes can access the same database without having to start the server manually. To do that, append ;AUTO_SERVER=TRUE to the database URL. You can use the same database URL independent of whether the database is already open or not. This feature doesn't work with in-memory databases.
// Application 1:
DriverManager.getConnection("jdbc:h2:/data/test;AUTO_SERVER=TRUE");
// Application 2:
DriverManager.getConnection("jdbc:h2:/data/test;AUTO_SERVER=TRUE");
From the H2 documentation:
It is also possible to open the database without file locking; in this
case it is up to the application to protect the database files.
Failing to do so will result in a corrupted database.
I think that if your application always uses the same configuration (a shared file database on a network folder), you need to create an application layer that manages concurrency.
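For the simpler automatic mixed mode, here is a minimal sketch (the path on the network folder and the table are placeholders): both instances use the identical URL, and whichever opens the database first also starts the server that the other connects to.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class SharedH2Example {
    // Both instances use the same absolute path on the network folder.
    private static final String URL = "jdbc:h2:/mnt/share/appdb;AUTO_SERVER=TRUE";

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(URL, "sa", "")) {
            try (Statement stmt = conn.createStatement()) {
                stmt.execute("CREATE TABLE IF NOT EXISTS events("
                        + "id BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY, "
                        + "payload VARCHAR(255))");
            }
            // INSERT from this instance; the other instance can read concurrently,
            // because all access goes through the auto-started server.
            try (PreparedStatement ps =
                         conn.prepareStatement("INSERT INTO events(payload) VALUES (?)")) {
                ps.setString(1, "hello from instance A");
                ps.executeUpdate();
            }
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM events")) {
                if (rs.next()) {
                    System.out.println("rows: " + rs.getInt(1));
                }
            }
        }
    }
}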

Securing Oracle External table data file

I've just learned about one of Oracle's features: external tables. But when I use an external table in my application, I run into a problem and wonder how to solve it.
The problem is the security of the external table's data file (it's in text format).
How can I secure this data file effectively?
The target environment is: Red hat Linux enterprise 5.4; Oracle 10g.
Because of that environment, I cannot use Oracle DBFS to secure this file. Should I store the external data in a LOB column in an independent database? Can you suggest any other solution to my problem?
Make sure only the operating system user that starts the Oracle service/database (typically "oracle") can read the input files for the external tables.
Then no other user will be able to mess around with them.
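If you want to enforce that from the Java side rather than with chmod, a minimal sketch (the file path is a placeholder, and the program must run as the file's owner or as root) sets the data file to owner-only read/write:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class LockDownExternalTableFile {
    public static void main(String[] args) throws IOException {
        // Placeholder path to the external table's data file.
        Path dataFile = Paths.get("/u01/app/external/customers.txt");
        // rw------- : only the owning user (the "oracle" account) can read or modify
        // the file, the equivalent of chmod 600.
        Set<PosixFilePermission> ownerOnly = PosixFilePermissions.fromString("rw-------");
        Files.setPosixFilePermissions(dataFile, ownerOnly);
    }
}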

Locking entire database while running a delayed job

My delayed job exports a slightly edited version of most of the tables in the app's database, and while it runs it is critical that none of the current data is being edited.
Is it possible to lock the entire database while running this delayed job?
More Information:
The database to be exported is PostgreSQL; Heroku's PostgreSQL database, to be more specific.
The flow is something like this (everything below should be done automatically by the code):
put the site into maintenance mode,
freeze and then export the database, then
when the export is complete, take the site out of maintenance mode.
Given there is not a lot of information with your question, I am going to answer you as best I can.
1) What is the database type and model? Is it a standalone DB like MS Access or Informix SE?
2) If it is not a standalone engine, does this database support replication? I used to work a lot with MS SQL Server, and replication had implications while the database was live and being edited; that is, the implication was whether edited data was replicated. In this case, consult the docs. Is it an option to use replication to preserve the current database?
3) What kind of task is this? It sounds like maintenance. Our Informix SE databases lock when being imported or exported. On the production server, it is my job to make sure no local server applications are trying to access the locked DB, and that our external payments website cannot interfere while the DB is locked.
4) If this is a production site that is not in maintenance mode, then I suggest you probably do not want to lock an entire database.
I am sorry for not answering your question directly, but more information is needed; for example, are you asking whether this can be done from the Ruby DB interface on some particular database?
