Which tables in Moodle are safe to truncate? - caching

For the purposes of dumping a Moodle database for backups, which tables can be exported as structure only (that is, which cache tables are safe to truncate)?
I don't want to export cached data, as the backups are to be kept in a revision control system.
Thank you.
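For illustration, one way to split such a dump with mysqldump, assuming a MySQL-backed Moodle with a schema named moodle; the table names below (mdl_sessions, mdl_cache_flags) are only examples, and the full list of cache/session tables depends on your Moodle version:

    # Dump schema + data for everything except the cache/session tables.
    mysqldump --ignore-table=moodle.mdl_sessions \
              --ignore-table=moodle.mdl_cache_flags \
              moodle > moodle_data.sql

    # Dump structure only for those tables, so they are recreated empty on restore.
    mysqldump --no-data moodle mdl_sessions mdl_cache_flags > moodle_cache_structure.sql

Restoring both files then rebuilds the cache tables empty, which keeps the version-controlled dump free of cached data.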

Related

Oracle application - migration to Exadata server

We have an upcoming migration of our Oracle database to an Exadata server. I want to clarify some issues I have thought of:
Will there be any issues with the code - performance issues? Exadata has another type of optimizer; it doesn't use indexes and has a columnar optimizer, if I'm not mistaken.
Currently there are some import or export files generated on the database server (accessed via FileZilla). I understand that on Exadata the database server is inaccessible, and I suspect that either:
• we will have to move those files to another server - Oracle knows only FTP (whose ports are closed at our client) -> how do we write to / read from another server? (As far as I understand, they would like to put all the files on the WAS server.)
• or we will need to import the files into a table using the Java application and process them from there (and the same for the exported files).
Can files that come automatically from other applications be written to the database server? Or do we have the same problems as with the manual part?
We have plenty of database jobs that run KSH scripts on the database server - is there a problem with them? I understand they should also be moved to the WAS server, but I do not know how Oracle will call them from there.
Will there be any problems with Jenkins deployments? Has anything changed? Here we save the SQL/PLSQL sources in XML files, from which the whole application is restored (packages, configuration tables, nomenclatures, ...), with the exception of the working data; the XML files are read through a procedure from an Oracle directory.
If you can think of any other issues concerning this migration, any problems you have encountered during or after the migration to Exadata, please share!
Thank you,
Step by step:
On Exadata you are going to see the same optimizer behaviour, with some improvements: Exadata may improve full table scan performance thanks to smart scans. The storage layer can avoid retrieving data blocks during a full table scan because it knows in advance that they do not contain the needed data.
On Exadata you can use DBFS file systems, which can be exposed to external servers; that might be useful for external tables, imports/exports and so on.
You can write your files to a DBFS file system you configure.
You could use DBFS here too, if you want the KSH scripts to be accessible from outside Exadata.
Point your Oracle directory at a directory in the DBFS file system where you put the XML files, and you are done.
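As a minimal sketch of that last step (the mount point /dbfs/staging and the names xml_dir and app_user are placeholders, not anything from your environment):

    -- Re-point the directory object at a path inside the DBFS mount.
    CREATE OR REPLACE DIRECTORY xml_dir AS '/dbfs/staging/xml';
    GRANT READ, WRITE ON DIRECTORY xml_dir TO app_user;

The procedure that reads the XML files keeps working unchanged, since it only references the directory object by name.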

Why so slow returning data from Oracle external tables?

We are an ETL shop and make heavy use of external tables. Typically these tables are queried to populate staging tables. I am surprised at the time it takes for queries to return data from the external tables.
Typically there is around a 15 second delay before any result is returned. This is true even in the cases when the data file contains no data and when the data file does not exist. The delay doesn't seem related to the number of rows in the file.
I am logging into the database server itself, on which the external table data files are located.
Is this expected behaviour?
File system operations (ls, vim) at least on smaller files happen with no delay.
All files on local disk.
Oracle 12.1.
Oracle Linux Server release 6.6
I would recommend reviewing the Oracle 12.2 release notes. There was a patch for the Big Data Appliance firmware (22911748) for Exadata, and a fix was made in 12.2.
It addresses a view that is specific to access to external tables, and it's possible that you are impacted by it. The view is LOADER_DIR_OBJS, which is used to query the directory that external tables point to.
Our customers are running into very similar issues, and Oracle recommended installing the 12.2 release, which contains the patch.
So we are currently testing the 12.2 release. Any time an external table is read, it has to access the LOADER_DIR_OBJS system view. Typically the poor performance comes from this view, which accesses the SYS.OBJ$ and SYS.X$DIR system objects with a suboptimal query plan. Some people have found workarounds. (See Oracle Workaround Document ID 2034938.1 to see if it applies to you.)
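One quick way to check whether this view is your bottleneck, as a SQL*Plus sketch (my_ext_table is a placeholder for one of your external tables, and querying SYS.LOADER_DIR_OBJS directly needs SYS-level privileges):

    SET TIMING ON
    -- If this dictionary query alone takes ~15 seconds, the delay is in the
    -- view's query plan, not in reading the data file itself.
    SELECT COUNT(*) FROM sys.loader_dir_objs;
    -- Compare with the external table query that shows the delay.
    SELECT COUNT(*) FROM my_ext_table;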

monetdb: Online Schema Alters

Does MonetDB support online schema changes? For example, adding/changing a column on the fly while the tables are loaded in memory. Many in-memory databases have to be restarted for schema changes to be reflected, so I was wondering if MonetDB takes care of this issue.
Yes. MonetDB supports SQL-99, which includes DDL statements that are immediately reflected in the schema.
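For example (a sketch; sales and discount are made-up names), standard DDL like the following can be run against a live database, with no restart needed:

    -- Add a column on the fly; existing rows get NULL for it.
    ALTER TABLE sales ADD COLUMN discount DOUBLE;
    -- Change the column's default for subsequent inserts.
    ALTER TABLE sales ALTER COLUMN discount SET DEFAULT 0.0;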

Locking entire database while running a delayed job

My delayed job involves exporting a slightly edited version of most of the tables in the app's database, and while doing so it is critical that none of the current data is being edited.
Is it possible to lock the entire database while running this delayed job?
More Information:
The database to be exported is PostgreSQL (Heroku's PostgreSQL database, to be more specific).
The flow is something like this (all of the below should be done automatically by the code; see the sketch after this list):
the site is put in maintenance mode,
the database is frozen and then exported, then
when the export is complete, the site is re-activated
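For what it's worth, PostgreSQL has no single statement that locks a whole database, but you can lock the relevant tables for the duration of one transaction. A minimal sketch (orders and customers are placeholders for the app's tables):

    BEGIN;
    -- EXCLUSIVE mode still allows plain SELECTs (so an exporting process can
    -- read) but blocks all writes until COMMIT.
    LOCK TABLE orders, customers IN EXCLUSIVE MODE;
    -- ... run the export while this transaction is held open ...
    COMMIT;

The lock only protects the listed tables, so every table being exported has to appear in the LOCK statement.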
Given there is not a lot of information with your question, I am going to answer you as best I can.
1) What is the database type and model? Is it a standalone DB like MS Access or Informix SE?
2) If it is not a standalone engine, does this database support replication? I used to work a lot with MS SQL Server, and replication had implications while the database was live and being edited; that is, the question was whether edited data got replicated. In this case, consult the docs. Is it an option to use replication to preserve the current database?
3) What kind of task is this? It sounds like maintenance. Our Informix SE databases lock when being imported or exported. On the production server, it is my job to make sure no local server applications are trying to access the locked DB, and that our external payments web site cannot interfere while the db is locked.
4) If this is a production site that is not in maintenance mode, then I suggest you probably do not want to lock an entire database.
I am sorry for not answering your question directly, but more information is needed, such as whether you are asking if this can be done from the Ruby DB interface on some particular model of DB.

How to use the exp command to export an Oracle DB with files in different disk locations

We get a problem while trying to export an Oracle DB. OS: CentOS ~5.2. DB: Oracle 10g.
The exp command exports DB files only in this location:
/home/oracle/OraHome_1/oradata/master/xxx.dbf
but the tool can't export files in a different location (we learned about these files after getting a trace), like these:
'/disk1/dblog06.dbf',
'/home/disk2/system01.dbf',
Please advise me how to get a dump file, or how to back it up.
Thanks.
You appear to have misunderstood what exp does, and particularly what the file parameter is for. The file is the output dump file, normally given a .dmp extension. Export takes data out of the database instance, it does not work under the hood on the datafiles - you have to tell it which data you want (full, user, tables, or tablespaces) and where to put it, not where it comes from.
If you really did try to exp file=/home/disk2/system01.dbf then what you actually asked it to do was trash your database; you're lucky that it did not overwrite the datafile and cause a catastrophic failure. Oracle seems to have saved you from yourself there, though possibly only thanks to having exclusive locks on the files at the time.
You need to read up on how it works and see if it actually does what you want - as APC notes, it's not a backup tool. Look at the Oracle documentation for your version, or somewhere like http://www.orafaq.com/wiki/Import_Export_FAQ, and also look at using Data Pump instead of exp.
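For instance, a full-database export would look something like this (credentials, connect string, and paths are placeholders); note that file= names the output dump, not a datafile:

    # Exports all schemas; the .dmp file is created by exp, not read by it.
    exp system/password@ORCL full=y file=/backup/master_full.dmp log=/backup/master_full.log

The dump then contains the data from all tablespaces, regardless of which disks their datafiles live on.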
I am not sure if that is the question, but the exp command will export database objects according to their logical schema (user name, table name). It does not matter which physical database file the data is coming from.
exp works through an Oracle instance, which needs to have the datafiles mounted.
Are these other files part of the Oracle database? Maybe another database? You need to find out which Oracle server uses them, and then run exp against that instance.
EXPORT is not a backup tool. It is meant for transferring data from one database to another, or perhaps from one schema to another.
If you want to recover your data in the event of a database crash or corruption then you need to use the appropriate tool. There are OS solutions to this, but Oracle comes with a sophisticated backup and recovery tool: RMAN. Find out more.
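As a minimal illustration of the RMAN route (run on the database server; this assumes OS authentication as SYSDBA and a database running in archivelog mode):

    # Back up the datafiles and archived redo logs, wherever they live on disk.
    rman target / <<EOF
    BACKUP DATABASE PLUS ARCHIVELOG;
    EOF

Unlike exp, this backs up the physical datafiles regardless of their location and supports point-in-time recovery.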
