How to check if a database table exists in TYPO3 using Doctrine?

Although the TYPO3 core takes good care of having all tables, there might be situations where you need to check if a table exists.
The situation at hand is an upgrade wizard that interacts with another extension, where the other extension has a migration changing table names.
So: how does one check whether a table exists in a current TYPO3 installation, i.e. using Doctrine, and possibly across multiple database connections?

At least for TYPO3 10 LTS and 11 LTS (and, as of now, probably 12 LTS too):
use TYPO3\CMS\Core\Database\ConnectionPool;
use TYPO3\CMS\Core\Utility\GeneralUtility;

// Ask the schema manager of the table's connection whether the table exists
return GeneralUtility::makeInstance(ConnectionPool::class)
    ->getConnectionForTable($tablename)
    ->getSchemaManager()
    ->tablesExist([$tablename]);
This works because, if no dedicated connection is configured for the table (for instance because the table doesn't exist), the default connection is used as a fallback, and the existence check can be run there.

Related

Don't Leave Me Table

I am trying to create a rake task that will roll back the database but keep one table. I would guess that the easiest way to do that would be to store that table (maybe in seeds.rb) and then re-insert it. My ORM is ActiveRecord and my database is PostgreSQL.
If you only need to do the rollback in your development environment, you could do the rollback, edit the migration file to contain only the one table you want to keep, and then re-migrate. (Don't forget you may need to roll back both the dev and test environments.)
If you're in a team that already has performed this migration, you're probably better off not rolling back. Instead you could create a new migration that undoes all but the one table's changes.
Do you mean drop all tables except one?
You can list Postgres tables by querying the system catalogs (for example pg_tables); there are several other ways to list tables as well.
Then you could use drop_table (note the cascade option); a sketch of the whole procedure follows below.
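For illustration, here is a minimal JDBC sketch of that procedure against Postgres, outside ActiveRecord; the table name users, the JDBC URL and the credentials are hypothetical placeholders:

import java.sql.*;
import java.util.ArrayList;
import java.util.List;

public class DropAllButOne {
    public static void main(String[] args) throws SQLException {
        String keep = "users"; // hypothetical: the one table to preserve
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "me", "secret")) {
            // List all tables in the public schema via the pg_tables catalog
            List<String> tables = new ArrayList<>();
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT tablename FROM pg_tables WHERE schemaname = 'public'")) {
                while (rs.next()) {
                    tables.add(rs.getString(1));
                }
            }
            // Drop everything except the table to keep; CASCADE also
            // removes dependent objects such as foreign-key constraints
            try (Statement st = conn.createStatement()) {
                for (String table : tables) {
                    if (!table.equals(keep)) {
                        st.executeUpdate("DROP TABLE IF EXISTS \"" + table + "\" CASCADE");
                    }
                }
            }
        }
    }
}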

Cassandra Best Practice on edits: delete & re-insert vs. update?

I am new to Cassandra. I am looking at many examples online. Here is one from JHipster Cassandra examples on GitHub:
https://gist.github.com/jdubois/c3d3bedb869466731316
The repository save(user) method does a read (to check for existence), then a delete and re-insert of the existing user across all the denormalized tables whenever the user data changes.
Is this best practice?
Is this only because of how the data model for this sample is designed?
Is this sample's design a result of twisting a POJO framework into a NoSQL database design?
When would I want to just do an update in Cassandra? It supports updates at the field level, so it seems like that would be preferred.
First of all, the delete operations should be part of the batch for more robust error handling. But it looks like there are also some concurrency issues with the code: it updates the user based on the value read earlier, and it's not safe to assume that this is still the latest value by the time save() actually executes. It will also simply overwrite any keys in the lookup tables that might already be in use by a different user at that point, e.g. the login could already exist for another user while insertByLoginStmt executes.
It is not necessary to delete a row before inserting a new one.
But if you are replacing rows and the new columns differ from the existing ones, then you need to delete all existing columns and insert the new ones, or insert the new and delete the old; the order does not matter if it happens in a batch.
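For illustration, a minimal sketch of the delete-and-re-insert done as a single logged batch with the DataStax Java driver (3.x-style API); the keyspace, table and column names are hypothetical, and the trailing lightweight-transaction insert addresses the overwrite concern raised above:

import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class UserSave {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("demo")) {
            // Logged batch: both statements are eventually applied
            // together, or not at all (atomicity, not isolation)
            BatchStatement batch = new BatchStatement(BatchStatement.Type.LOGGED);
            batch.add(new SimpleStatement(
                "DELETE FROM user_by_login WHERE login = ?", "old_login"));
            batch.add(new SimpleStatement(
                "INSERT INTO user_by_login (login, user_id) VALUES (?, ?)",
                "new_login", 42L));
            session.execute(batch);

            // For the uniqueness concern: a lightweight transaction makes the
            // insert conditional, so an existing login isn't silently overwritten
            ResultSet rs = session.execute(new SimpleStatement(
                "INSERT INTO user_by_login (login, user_id) VALUES (?, ?) IF NOT EXISTS",
                "new_login", 42L));
            if (!rs.wasApplied()) {
                System.out.println("login already taken by another user");
            }
        }
    }
}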

Update H2Database schema with ORMLite

I am using H2 Database with ORMLite. We have 60 tables, all created with ORMLite's "create if not exists". Now we are going to ship a major release, and the requirement is to upgrade databases from the old version. But I need to know how to do this with ORMLite, as in the new version some tables will be new and some are existing old tables with modifications; e.g. we have a job table in the previous version's database, and in this release we added 2 more columns and changed the datatype of one column. Any suggestions? I have seen some other posts regarding ORMLite for Android SQLite. How can that approach be used for other databases? E.g. like this post:
ORMLite update of the database
I'm not sure there is any easy answer here. ORMLite doesn't directly provide any magic capabilities to make migrating data easier. Here are some thoughts, however (a concrete sketch follows the list):
You will need to use some sort of SQL logic to determine whether your application has the "old" or "new" schema installed. You could use raw SQL to look for the existence of particular tables or columns. Going forward, it might be a good idea to store a meta table with the database version, something Android gives you for free.
You can create new and old versions of each of your entities (OldAccount versus Account) and map them both to the same table with @DatabaseTable(tableName = "accounts"). Then you can read the old entities using oldAccountDao.iterator(), convert them to new entities and (as long as you aren't mucking with the primary key) update them using the new accountDao.update(...).
You can certainly come up with a series of SQL statements that will need to be performed in the proper order to change the schema. Then call dao.executeRaw(...) with them in order.
Obviously the new entities will just be created.
You might want to consider dumping a backup file of all tables somewhere before the conversion process, and telling the user about it, so that if something fails your users can revert and run the old version of your application.
Hopefully something here is helpful.
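As a concrete illustration of thoughts 1 and 3 above, here is a minimal sketch using ORMLite's JDBC support against H2; the entity, the new column names and the H2 URL are hypothetical assumptions, not taken from the question:

import com.j256.ormlite.dao.Dao;
import com.j256.ormlite.dao.DaoManager;
import com.j256.ormlite.field.DatabaseField;
import com.j256.ormlite.jdbc.JdbcConnectionSource;
import com.j256.ormlite.support.ConnectionSource;
import com.j256.ormlite.table.DatabaseTable;

public class Migrate {

    @DatabaseTable(tableName = "job")
    public static class Job {
        @DatabaseField(generatedId = true)
        long id;
        @DatabaseField
        String status;
    }

    public static void main(String[] args) throws Exception {
        ConnectionSource source = new JdbcConnectionSource("jdbc:h2:./data/app");
        try {
            Dao<Job, Long> jobDao = DaoManager.createDao(source, Job.class);
            // Raw SQL to evolve the existing table: add the two new columns...
            jobDao.executeRaw("ALTER TABLE job ADD COLUMN priority INT DEFAULT 0");
            jobDao.executeRaw("ALTER TABLE job ADD COLUMN owner VARCHAR(64)");
            // ...and change the datatype of an existing column (H2 syntax)
            jobDao.executeRaw("ALTER TABLE job ALTER COLUMN status VARCHAR(32)");
        } finally {
            source.close();
        }
    }
}

Brand-new tables can still go through the usual TableUtils.createTableIfNotExists(...) path, as noted above.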

Database Project Insists on "Rebuilding" Table on Deployment for Dropped Columns

So I have a VS2010 Database Project that I am deploying with a few schema changes. I have one table in particular that VSDBCMD insists on "rebuilding", i.e. rename->create->copy->drop.
The only changes to this table are dropping some columns, which could be handled by, I dunno, simply dropping the columns. Normally I wouldn't mind, except this particular table is called "Attachments" and weighs in at 15 gigs or so. The rebuild takes a long time, locks up the database, fails locally (as I don't have 15+ gigs free), and times out remotely in our testing environment.
Can anyone direct me to the rules VSDBCMD follows for changing the schema when it deploys?
Or perhaps you have experienced similar issues and have a suggestion?
Thanks!
VSDBCMD just 'likes' rebuilding tables too often, and unfortunately I don't have the 'magic VSDBCMD manual' for when it chooses to rebuild a table, but I don't trust the output of VSDBCMD on a production database without checking it manually first anyway.
There's an 'IgnoreColumnOrder' setting in the 'dbname.sqldeployment' file that might help prevent rebuilding the table (maybe the rebuild is triggered because the column order has changed); see the sketch below.
In your case I would just run a manually created script on your DB.
Heck, writing 'alter table Attachments drop column uselessData' would've probably cost you 10% of the time you put into asking this question in the first place :)
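For reference, a sketch of where that setting lives. The .sqldeployment file is MSBuild-style XML; the surrounding elements here are assumed from the standard VS2010 format rather than copied from a real project:

<?xml version="1.0" encoding="utf-8"?>
<!-- fragment of dbname.sqldeployment (assumed layout) -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- tell the deployment engine not to treat a change in column
         order as a reason to rebuild the table -->
    <IgnoreColumnOrder>True</IgnoreColumnOrder>
  </PropertyGroup>
</Project>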

Control which columns become primary keys with Microsoft Access ODBC link to Oracle

When you create a Microsoft Access 2003 link to an Oracle table using Oracle's ODBC driver, you are sometimes asked to state which columns are the primary key(s).
I would like to know how to change that initial assignment, or even how to get Access/ODBC to forget the assignment. In my limited testing I wonder if the assignment isn't cached by the ODBC driver itself.
The columns I initially chose are not correct.
Update: I never did get a full answer on this one; deleting the links and then restoring them didn't work. I think it's an obscure bug. I've moved on and haven't had to worry about this oddity since.
You must delete the link to the table and create a new one. When a table is linked, all the connection info about the table's path, structure (including primary key), permissions, passwords and statistics is stored in the Access db. If any of those items change in the linked table, refreshing links won't automatically update them on the Access side, because Access continues to use the previously stored info. You must delete or drop the linked table and recreate the link, storing the current connection information.
I don't know for sure if this next bit also applies to ODBC-linked tables, but I suspect it does. For Jet tables, it's a good idea to periodically delete all links and recreate them to improve query performance: if a linked table's statistics were gathered while the table had few records, they become stale once the table is filled with many more, and fresh statistics tell Jet's optimizer whether using indexes or a full table scan is the better course of action when running a query.
Is it not possible to delete the link and then relink?
