Now that I have registered my sync services schema, how do I update it to my new model version?
I just found this in the docs:
"It is highly recommended that you register the schema periodically even if it does not change—for example, register the schema each time your application launches. However, if a schema changes, update it with caution, because changing a schema may cause records to be deleted and force some clients to perform a slow sync."
So we just need to re-register the schema.
I'm working on a simple task of adding a new table to an existing SQL DB and wiring it into a SpringBoot API with SpringData.
I would typically start by defining the DB table directly (creating the PK, FKs, etc.) and then creating the Java bean that represents it, but I am curious about using the Spring Data initialization feature.
I am wondering when and where Spring Data + JPA's schema generation and DB initialization may be useful. There are many tutorials on how it can be implemented, but the when and why are not as clear to me.
For example:
Should I convert my existing lower-environment DBs (hand-coded) to be initialized automatically? If so, by dropping the existing tables and allowing the app to execute DDL?
Should this feature be relied on at all in a production environment?
Should generation or initialization be run only once? Some tutorials mention this process running continually, but why would you choose to lose data that often?
What is the purpose of the drop-and-create JPA action? Why would you ever want to drop tables? How are things like UAT test data handled?
My two cents on these topics:
Most people would say that you should not rely on automated database creation because the database is a core concept of your application, and you want to know for sure what is really happening. I tend to agree with them. Unless it is a POC or something not production-critical, I would prefer to define the database details myself.
In my opinion, no.
This might be OK in non-production environments, or in early, exploratory development. Definitely not in production.
On a POC or in early, exploratory development this is OK. In any other case I don't see it being useful. Test data might also be part of the initial setup of the database; Spring allows you to do that by defining an SQL script that inserts data into the database on startup.
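The startup script mentioned above is just a data.sql file on the classpath; a minimal sketch (the table name and values are illustrative, and on a non-embedded database you would also have to enable script-based initialization, e.g. spring.sql.init.mode=always in recent Spring Boot versions):

```sql
-- src/main/resources/data.sql
-- Executed by Spring Boot at startup, after the schema exists.
-- Table and values are illustrative.
INSERT INTO app_user (id, username, email)
VALUES (1, 'uat_tester', 'uat@example.com');
```

This is how UAT-style seed data can travel with the application without relying on auto-DDL for the schema itself.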
Bottom line: in my opinion you should not rely on this feature in production. Instead you might want to take a look at Liquibase or Flyway (nice article comparing both: https://dzone.com/articles/flyway-vs-liquibase), which are fully fledged database migration tools that you can rely on even in production.
My opinion in short:
No, don't rely on auto-DDL. It can be a handy feature in development but should never be used in production. And be careful: it will change your database whenever you change something in your entities.
But, and this is why I answer, there is a possibility to have Hibernate write the SQL to a file instead of executing it. This gives you the ability to make use of the feature but still control how your database is changed. I frequently use this to generate scripts that I then use as a blueprint for my own Liquibase migration scripts.
This way you can initially implement an entity in code and run the application, which generates the Hibernate SQL file containing the CREATE TABLE statement for your newly added entity. Then you don't have to write all those column names and types for the database table yourself.
To achieve this, add the following properties to your application.properties:
spring.jpa.hibernate.ddl-auto=none
spring.jpa.properties.javax.persistence.schema-generation.scripts.create-target=build/generated_scripts/hibernate_schema.sql
spring.jpa.properties.javax.persistence.schema-generation.scripts.action=create
This will generate the SQL script in your project folder at build/generated_scripts/hibernate_schema.sql.
I know this is not exactly what you were asking for but I thought this could be a nice hint on how to use Auto DDL in a safer way.
Yesterday, I made some migrations to a website that I had to roll back. Luckily, I had a backup of the database and was able to restore the leader database to a "good" state using Heroku's pg:backup:restore facility.
The leader database is followed by another database. Does the follower also get "restored" when I restore the leader? Will it contain the same data as the leader?
You can't roll back an existing database in place. When you use the rollback functionality you're actually forking the targeted database, thereby creating an entirely new database without any followers. If you need to do this for your primary database, you'll need to put the application in maintenance mode before creating the rollback database, promote it to primary, and then recreate any followers.
I have a JBoss 6 application running both EJB and Spring code (some legacy involved in this decision). It should communicate to Oracle and PostgreSQL databases, on demand.
JPA is the way DB operations are done, no direct JDBC is involved.
I would like to do the following: without altering the business logic, to be able to "silence" database updates/deletes from my application, without breaking the flow with any exceptions.
My current thoughts are:
Set the JDBC driver as read-only from the deployment descriptor - this works only with PostgreSQL (the Oracle driver does not support it)
Make a read-only user on the RDBMS level - but that would flood the application with errors
Make all transactions roll back instead of committing - is this possible?
Make the entity manager never persist anything - set the FlushMode to MANUAL and make sure flush() never gets called - but commit() still flushes everything.
Is there any other concise approach to this?
If you want to make sure the application works as in production, work on a replica of the database, and use a nightly scheduled job that overwrites the replica DB.
My request also includes the need for this behavior to be activated or deactivated at runtime.
The solution I found (currently for a proof-of-concept) is:
create a new user and grant it rights on the default schema's tables;
with this user, create views for each of the tables, with the same names (without the schema prefix);
create an INSTEAD OF trigger for each view that does nothing on insert, update, or delete;
create a data source and persistence unit for this user;
inject both entity managers at runtime and use the one that is needed.
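A minimal Oracle sketch of the view-plus-trigger steps (APP_OWNER, SILENT_USER, and the ORDERS table are illustrative names, not from the original post):

```sql
-- View in the silencing user's schema, same name as the base table
CREATE OR REPLACE VIEW silent_user.orders AS
  SELECT * FROM app_owner.orders;

-- INSTEAD OF trigger that silently discards all DML against the view
CREATE OR REPLACE TRIGGER silent_user.orders_noop
  INSTEAD OF INSERT OR UPDATE OR DELETE ON silent_user.orders
BEGIN
  NULL;  -- do nothing: reads still work, writes are swallowed without errors
END;
/
```

Because JPA resolves unqualified table names against the connected user's schema, the persistence unit that connects as SILENT_USER transparently hits the views instead of the real tables.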
Thanks for your help!
In my project, I am using an Oracle database and SubSonic for the DAL. I have a problem with SubSonic and Oracle schemas, namely:
While developing, I used a DEV schema in the Oracle database and generated the DAL using SubSonic.
Later, when releasing to the customer, they used a new schema, TEST, in their Oracle database and changed the connection string in app.config to connect to Oracle. The error "Table or View does not exist" appeared; I traced it and saw that the schema of the tables was still DEV.
I do not want to re-generate the DAL after changing the schema every time I release to the customer. Please help me.
Firstly, your schema should not be DEV. DEV is a user or role.
Your schema name should be related to the data content (eg ACCOUNTS or SALES)
Secondly, consider whether you or the customer is going to decide the schema name. Say you have a product called FLINTSTONE. You may decide that the schema name should be FLINTSTONE. However your customer may want to run two instances of your product (eg one for local sales, the other for international) in the same database. So they want FS_LOCAL and FS_INTER as the schema names. Is that option a feature of your product?
Next, decide if your application should connect as the schema owner. There are good security reasons for NOT doing that. For example, the schema owner has privileges to drop tables, which is generally something the application doesn't do and thus, on the principle of least privilege, is something your application shouldn't have privileges to do.
Generally I would recommend a config parameter for the schema name; after connecting to the database, the app should issue an "ALTER SESSION SET CURRENT_SCHEMA = <whatever was in the config file>". The application database user would need the appropriate insert/update/delete/select/execute privileges on the objects in the application schema. If the application can't issue that statement itself, you can use a LOGON trigger in the database instead.
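A sketch of both options (SALES and SALES_APP are illustrative names for the data schema and the application user):

```sql
-- Option 1: the application issues this right after connecting
ALTER SESSION SET CURRENT_SCHEMA = SALES;

-- Option 2: a LOGON trigger does it server-side for the app user
CREATE OR REPLACE TRIGGER sales_app_logon
  AFTER LOGON ON sales_app.SCHEMA
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = SALES';
END;
/
```

Note that CURRENT_SCHEMA only changes name resolution for unqualified object names; it grants no privileges, so the grants mentioned above are still required.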
Gary is correct: do not use DEV as a schema on your own machine. With Oracle we typically set up the schema with the name the client is going to use for theirs. That, however, does not fix your issue. What you need to do is create a global alias in Oracle that maps, say, DEV to CLIENTSCHEMA. You should still rename the schema on your machine, but this will allow your schema to differ from your client's.
We restored an old MS CRM database over a newer version, but when I try to add users that already existed in the newer version, I get an error.
If I delete the users from our Active Directory and then try to add them to CRM, it works fine.
Is it possible that CRM is storing user information in MSCRM_CONFIG? And can this be removed in a supported way?
Have a look at the SystemUser table in the MSCRM_CONFIG database. I think I need to remove the users from this table, but I can't do a DELETE statement as it's not supported. :)
Did you restore this database using the Deployment Manager tool or simply by doing a SQL restore? Doing this directly from SQL will cause issues. You'll need to delete the organization in Deployment Manager and then delete the database in SQL. Then you should attach the database and recreate the organization from Deployment Manager, pointing it at the existing database.
Restoring just the org DB can lead to issues because some user info is stored in the config DB as well. In fact, there are entries in there mapping users to the org (SystemUserOrganizations), so when you restore the org DB, this mapping becomes out of date.
You would need to either go the Delete/Import route or manually do some unsupported cleansing of the Config DB Tables.