As described in the documentation, it seems to be possible to replicate packages (spec and body) alongside other database objects. But what happens if the packages on the remote database (slave) are under permanent access? Is replication still possible under such circumstances? I didn't find any hint in the documentation.
Many thanks in advance
We have an upcoming migration of our Oracle database to an Exadata server. I want to clarify some issues I have thought of:
Will there be any issues with the code, i.e. performance issues? Exadata has another type of optimizer; it doesn't use indexes and has a columnar optimizer, if I'm not mistaken.
Currently some import/export files are generated on the database server (accessed via FileZilla). I understand that on Exadata the database server is inaccessible, and I suspect that either:
• we will have to move those files to another server - Oracle knows only FTP (whose ports are closed at our client) -> how do we write to / read from another server? (As far as I understand, they would like to put all the files on the WAS server.)
• or we will need to import the files into tables using the Java application and process them from there (and the same with the exported files).
Can files that come automatically from other applications be written to the database server? Or do we have the same problems as with the manual part?
We have plenty of database jobs that run KSH scripts on the database server - will there be a problem with them? I understand they should also be moved to the WAS server, but I do not know how Oracle would call them from there.
Will there be any problems with Jenkins deployments? Does anything change? Here we save the SQL/PLSQL sources in XML files, from which the whole application is restored (packages, configuration tables, nomenclatures ...), with the exception of the working data; the XML files are read through a procedure from an Oracle directory.
If you can think of any other issues concerning this migration, any problems you have encountered during or after the migration to Exadata, please share!
Thank you,
Step by step:
On Exadata you are going to get the same optimizer behaviour, with some improvements: Exadata may improve full table scan performance thanks to Smart Scans. The storage cells can avoid returning data blocks during a full table scan because they know in advance which blocks do not contain the needed data.
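If you want to check after the migration whether Smart Scans actually kick in, Exadata exposes cell statistics in v$sysstat; a minimal sketch (statistic names may vary slightly between versions):

    -- How much work the storage cells offloaded via Smart Scan / storage indexes
    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('cell physical IO interconnect bytes returned by smart scan',
                    'cell physical IO bytes saved by storage index');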
On Exadata you can configure DBFS file systems and export them to external servers; that might be useful for external tables, imports/exports and so on.
You can write your files to the DBFS file system you configure.
You could use DBFS here too, if you want the KSH scripts to be accessible from outside your Exadata.
Let your Oracle directory point to a directory in the DBFS file system where you put your XML files and you are done.
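For illustration, assuming your DBFS file system is mounted under /dbfs/staging (the mount point and names here are placeholders), re-pointing the directory object is a one-liner:

    -- Point the existing directory object at the DBFS location holding the XML files
    CREATE OR REPLACE DIRECTORY xml_dir AS '/dbfs/staging/xml';
    GRANT READ, WRITE ON DIRECTORY xml_dir TO app_user;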
I am working on designing the structure of a Greenplum database.
We have many clients for whom we need to store data.
There are two ways to design the database structure: build one database with a different schema in it for each client,
or build a different database for each client. Which way is better?
What is more, we need to migrate databases or schemas from the dev environment to another environment.
Thanks
William
William,
Either way will work. If you are keeping multiple tenants in Greenplum and there is no data sharing, you might be better off keeping them in separate databases - easier for security and for backups. If there is a requirement that they share some common data, then multiple schemas in one database is the better option.
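For the shared-database option, a schema plus a dedicated role per tenant gives you isolation at the SQL level; a minimal sketch (client_a and the password are placeholders, and Greenplum follows PostgreSQL semantics here):

    CREATE ROLE client_a LOGIN PASSWORD 'change_me';
    CREATE SCHEMA client_a AUTHORIZATION client_a;
    -- Keep other tenants from even seeing the schema
    REVOKE ALL ON SCHEMA client_a FROM PUBLIC;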
I am not sure what version of Greenplum you are on, but you should be able to back up a schema from dev and restore it with gprestore, using the --redirect option to put it in the database you want it to be in.
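Sketched with the gpbackup toolkit (in current gprestore the flag is spelled --redirect-db; the database and schema names, and the timestamp gpbackup reports, are placeholders):

    # Back up a single schema from the dev database
    gpbackup --dbname dev_db --include-schema client_a
    # Restore that backup into a different target database
    gprestore --timestamp 20240101120000 --redirect-db prod_db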
Jim McCann
Pivotal
I have my current OBIEE installation on a Linux server under an oracle directory. Now the warehouse (ODI and data storage) has moved to AWS. Can I change a configuration file within this installation to point to the warehouse tables in AWS instead?
I know my articulation of the problem is a bit confusing. I will be happy to clarify if needed.
Thanks
To cut straight to what I assume you are asking:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.Oracle.html#CHAP_GettingStarted.Connecting.Oracle
Creating connection pools: https://docs.oracle.com/cd/E29542_01/bi.1111/e10540/conn_pool.htm#BIEMG1262
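In practice that means editing the connection pool's data source name so it targets the RDS endpoint, e.g. a connect descriptor along these lines (host, port and SID are placeholders you get from the RDS console):

    (DESCRIPTION=
      (ADDRESS=(PROTOCOL=TCP)(HOST=mydb.xxxxxxxx.us-east-1.rds.amazonaws.com)(PORT=1521))
      (CONNECT_DATA=(SID=ORCL)))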
I need a high-availability database system, but I can't do it through a database cluster or master/slave setup. I need a JDBC proxy that knows how to apply a single update statement to multiple data sources. I found the HA-JDBC project http://ha-jdbc.github.com/ which does that, but I was wondering if there is a similar or better library than HA-JDBC.
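For context, the point of such a proxy driver is to stay transparent to the application; with HA-JDBC the cluster is described in an XML file and addressed via its JDBC URL. A minimal sketch based on HA-JDBC's documented URL scheme (the cluster name mycluster, its ha-jdbc-mycluster.xml config file, and the credentials are assumptions):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class HaJdbcDemo {
        public static void main(String[] args) throws Exception {
            // Registers the HA-JDBC proxy driver
            Class.forName("net.sf.hajdbc.sql.Driver");
            // "mycluster" must match an ha-jdbc-mycluster.xml listing the real databases
            try (Connection c = DriverManager.getConnection(
                     "jdbc:ha-jdbc:mycluster", "user", "password");
                 Statement s = c.createStatement()) {
                // The proxy replays this write against every database in the cluster
                s.executeUpdate("UPDATE accounts SET balance = balance - 10 WHERE id = 1");
            }
        }
    }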
You can have a look at other projects around HA features: Sequoia/C-JDBC (http://c-jdbc.ow2.org).
But some links on the page are broken ;(
My delayed job exports slightly edited versions of most of the tables in the app's database, and while it runs, it is critical that none of the current data is being edited.
Is it possible to lock the entire database while running this delayed job?
More Information:
The database to be exported is PostgreSQL - Heroku's PostgreSQL database, to be more specific.
The flow is something like (all below should be done automatically by the code):
the site would be put in maintenance mode,
freeze then export the database, then
when exporting is complete, re-activate the site.
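For reference, the freeze step I have in mind would look something like this (table names are placeholders; SHARE mode lets readers continue but blocks all writes until the transaction ends):

    BEGIN;
    -- Block concurrent writes to the tables being exported
    LOCK TABLE users, orders IN SHARE MODE;
    -- ... run the export here, inside the same transaction ...
    COMMIT;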
Given there is not a lot of information with your question, I am going to answer you as best I can.
1) What is the database type and model? Is it a standalone DB like MS Access or Informix SE?
2) If not a standalone engine, does this database support replication? I used to work a lot with MS SQL Server, and replication had implications while the database was live and being edited - namely, whether edited data was replicated. In this case, consult the docs. Is it an option to use replication to preserve the current database?
3) What kind of task is this? It sounds like maintenance. Our Informix SE databases lock when being imported or exported. On the production server, it is my job to make sure no local server applications are trying to access the locked DB, and that our external payments web site cannot interfere while the db is locked.
4) If this is a production site that is not in maintenance mode, then I suggest you probably do not want to lock an entire database.
I am sorry for not answering your question directly, but more information is needed - for example, are you asking whether this can be done from the Ruby DB interface on some particular model of DB?