Tibco JDBC Update dry run - tibco

Is it possible to have a dry run for the JDBC Update activities in Tibco? Meaning that I want to run those activities, but not actually update the database.
Even running in test mode, if that's possible, would be good.

The only option I see is having a copy of the targeted tables in a separate schema, duplicating the data, and temporarily pointing the JDBC connection of your activity at this secondary, temporary/test database.
Since you can use global variables, no code is changed between test and delivery (a typical goal), and you can compare both DB tables to see if the updates WOULD HAVE run correctly...

I think I found a way. I haven't tested it yet, but theoretically it should work.
The solution is to install P6Spy and create a custom module that throws an exception whenever an INSERT or UPDATE is about to be executed.
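Roughly what I have in mind (untested, and assuming P6Spy 3.x where you can plug in a JdbcEventListener; the class below and its registration details are my own sketch, not a tested module):

import com.p6spy.engine.common.StatementInformation;
import com.p6spy.engine.event.SimpleJdbcEventListener;

// Sketch of a "dry run" listener: veto writes before they reach the real driver.
// How the listener gets registered (ServiceLoader file, spy.properties, etc.) depends
// on the P6Spy version, so check the P6Spy docs for the version you use.
public class DryRunListener extends SimpleJdbcEventListener {

    @Override
    public void onBeforeAnyExecute(StatementInformation statementInformation) {
        String sql = statementInformation.getSql();
        if (sql != null) {
            String normalized = sql.trim().toUpperCase();
            if (normalized.startsWith("INSERT") || normalized.startsWith("UPDATE")
                    || normalized.startsWith("DELETE")) {
                // Aborts the JDBC call, so nothing is written to the database.
                throw new IllegalStateException("Dry run - blocked statement: " + sql);
            }
        }
    }
}

The JDBC connection in Tibco would then point at the jdbc:p6spy: URL instead of the real one, which again is something you can switch with a global variable.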

You could wrap the activity in a transaction group and roll back whenever you only want to test the statement. Otherwise just exit the group normally so the data gets committed.

Related

When and how should SpringData + JPA schema and DB initialization be used?

I'm working on a simple task of adding a new table to an existing SQL DB and wiring it into a SpringBoot API with SpringData.
I would typically start by defining the DB table directly, creating the PKs and FKs, etc., and then creating the Java bean that represents it, but I am curious about using the SpringData initialization feature.
I am wondering when and where Spring Data + JPA's schema generation and DB initialization may be useful. There are many tutorials on how it can be implemented, but the when and why are not as clear to me.
For example:
Should I convert my existing lower-environment DBs (hand-coded) to be initialized automatically? If so, by dropping the existing tables and allowing the app to execute DDL?
Should this feature be relied on at all in a production environment?
Should generation or initialization be run only once? Some tutorials mention this process running continually, but why would you choose to lose data that often?
What is the purpose of the drop-and-create JPA action? Why would you ever want to drop tables? How are things like UAT test data handled?
My two cents on these topics:
Most people may say that you should not rely on automated database creation because the database is a core concept of your application and you might want to take over the task so that you know for sure what is really happening. I tend to agree with them. Unless it is a POC or something not production critical, I would prefer to define the database details myself.
In my opinion no.
This might be OK in non-production environments, or in early and exploratory development. Definitely not in production.
In a POC or in early and exploratory development this is OK. In any other case I don't see this being useful. Test data might also be part of the initial setup of the database; Spring allows you to do that by defining an SQL script that inserts data into the database on startup.
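As a quick illustration of that last point (the test-data.sql file name and the config class below are made-up examples): in Spring Boot the declarative route is simply a data.sql on the classpath (subject to your Boot version's initialization settings), and programmatically you can do roughly the same with Spring's ResourceDatabasePopulator:

import javax.sql.DataSource;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;
import org.springframework.jdbc.datasource.init.ResourceDatabasePopulator;

// Runs a test-data script once at startup; only wire this up in non-production profiles.
@Configuration
public class TestDataConfig {

    @Bean
    CommandLineRunner loadTestData(DataSource dataSource) {
        return args -> new ResourceDatabasePopulator(new ClassPathResource("test-data.sql"))
                .execute(dataSource);
    }
}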
Bottom line: in my opinion you should not rely on this feature in production. Instead you might want to take a look at Liquibase or Flyway (nice article comparing both: https://dzone.com/articles/flyway-vs-liquibase), which are fully fledged database migration tools that you can rely on even in production.
My opinion in short:
No, don't rely on Auto DDL. It can be a handy feature in development but should never be used in production. And be careful, it will change your database whenever you change something on your entities.
But, and this is why I answer, there is a possibility to have Hibernate write the SQL to a file instead of executing it. This gives you the ability to make use of the feature but still control how your database is changed. I frequently use this to generate scripts that I then use as a blueprint for my own Liquibase migration scripts.
This way you can initially implement an entity in the code and run the application, which generates the Hibernate SQL file containing the CREATE TABLE statement for your newly added entity. Now you don't have to write all those column names and types for the database table yourself.
To achieve this, add the following properties to your application.properties:
spring.jpa.hibernate.ddl-auto=none
spring.jpa.properties.javax.persistence.schema-generation.scripts.create-target=build/generated_scripts/hibernate_schema.sql
spring.jpa.properties.javax.persistence.schema-generation.scripts.action=create
This will generate the SQL script in your project folder within build/generated_scripts/hibernate_schema.sql
I know this is not exactly what you were asking for but I thought this could be a nice hint on how to use Auto DDL in a safer way.

Why do I have to disable Javers' schema management in order to initialize my application?

In my Spring-Boot project when:
javers.sqlSchemaManagementEnabled=true
The Javers tables are created on the first execution (when the tables do not exist in the database) and the code runs as expected. However, from the second execution onwards an exception is thrown saying that the tables cannot be created because they already exist. This is the situation I cannot understand: isn't Javers supposed to know that the tables already exist and not attempt to create them?
javers.sqlSchemaManagementEnabled=false
If the tables were already created in the database, either manually or by executing the application with this option set to 'true' at least once, the application executes as expected.
What am I supposed to do?
Is there something wrong with my Spring-Boot configuration? Isn't the application supposed to run with 'sqlSchemaManagementEnabled=true' even when the tables already exist?
Am I expected to leave 'sqlSchemaManagementEnabled=false' and create the tables manually?
I had the same problem when using a schema other than public in PostgreSQL.
I solved it by switching to the public schema; now it works correctly with javers.sqlSchemaManagementEnabled=true.
For other schemas, you should somehow specify the schema name in org.javers.repository.sql.schema.TableNameProvider.
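If I recall the JaVers API correctly, you can also build the SQL repository yourself and hand it the schema name; a hedged sketch (the withSchema call and the javers_audit schema name are assumptions to verify against your JaVers version):

import java.sql.SQLException;
import javax.sql.DataSource;
import org.javers.core.Javers;
import org.javers.core.JaversBuilder;
import org.javers.repository.sql.ConnectionProvider;
import org.javers.repository.sql.DialectName;
import org.javers.repository.sql.JaversSqlRepository;
import org.javers.repository.sql.SqlRepositoryBuilder;

public class JaversConfig {

    // Builds a JaVers instance whose tables live in a non-default schema.
    public static Javers javersFor(DataSource dataSource) {
        ConnectionProvider connectionProvider = () -> {
            try {
                return dataSource.getConnection();
            } catch (SQLException e) {
                throw new RuntimeException(e);
            }
        };
        JaversSqlRepository sqlRepository = SqlRepositoryBuilder.sqlRepository()
                .withConnectionProvider(connectionProvider)
                .withDialect(DialectName.POSTGRES)
                .withSchema("javers_audit")   // assumption: schema support in recent JaVers versions
                .build();
        return JaversBuilder.javers()
                .registerJaversRepository(sqlRepository)
                .build();
    }
}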
If javers.sqlSchemaManagementEnabled=true, Javers creates its SQL tables if they do not already exist.
It's checked here:
https://github.com/javers/javers/blob/master/javers-persistence-sql/src/main/java/org/javers/repository/sql/schema/JaversSchemaManager.java#L215
It's hard to say why it doesn't work in your case; try to debug this code using the latest Javers version.

Make a J2EE application avoid updating the DB

I have a JBoss 6 application running both EJB and Spring code (some legacy involved in this decision). It should communicate to Oracle and PostgreSQL databases, on demand.
JPA is the way DB operations are done, no direct JDBC is involved.
I would like to do the following: without altering the business logic, to be able to "silence" database updates/deletes from my application, without breaking the flow with any exceptions.
My current thoughts are:
Set the JDBC driver as read-only from the deployment descriptor - this works only with PostgreSQL (the Oracle driver does not support it)
Make a read-only user at the RDBMS level - this would probably flood me with errors
Make all transactions roll back instead of committing - is this possible? (a sketch of what I imagine is below)
Make the entity manager never persist anything - set the FlushMode to MANUAL and make sure flush() never gets called - but commit() still flushes everything.
Is there any other concise approach to this?
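On the rollback idea from the list above, this is roughly what I imagine, assuming the Spring-managed transactions (DryRunGuard and its flag are names I made up; on the EJB side the equivalent would be SessionContext.setRollbackOnly()):

import java.util.concurrent.atomic.AtomicBoolean;
import org.springframework.stereotype.Component;
import org.springframework.transaction.interceptor.TransactionAspectSupport;

// Runtime-toggleable "dry run" switch: when enabled, the current transaction is marked
// rollback-only, so flushes still happen but nothing is committed.
@Component
public class DryRunGuard {

    private final AtomicBoolean dryRun = new AtomicBoolean(false);

    public void setDryRun(boolean enabled) {
        dryRun.set(enabled);
    }

    // Called at the start of a @Transactional write path (or from an around-advice,
    // so the business methods themselves stay untouched).
    public void applyIfDryRun() {
        if (dryRun.get()) {
            TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
        }
    }
}

Would that cover the requirement, or are there side effects (sequences advancing, triggers firing) that make it a bad idea?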
If you want to make sure the application works as it does in production, work on a replica of the database. Use a nightly scheduled job to overwrite the replica DB.
My request also includes the need for this behavior to be activated or deactivated at runtime.
The solution I found (currently for a proof-of-concept) is:
create a new user and grant it rights on the default schema's tables;
with this user create views for each of the tables, with the same name (without the schema prefix);
create a trigger for each view that does nothing on insert, update, or delete, using INSTEAD OF;
create a data source and persistence unit for this user;
inject two entity managers and, at runtime, use the one that is needed;
Thanks for your help!

Nhibernate Nunit - clear database between testcases

We have a rather extensive test suite that takes forever to execute.
After each test has completed, the database (MSSQL) needs to be emptied so it is fresh for the next testcase.
The way we do this is by temporarily removing all foreign keys, TRUNCATE'ing all tables, and re-adding the FKs.
This step takes somewhere between 2-3 seconds, according to NHProfiler. All the time is seemingly spent with the FK operations.
Our current method is clearly not optimal, but which way should we go to improve performance? The number of rows actually deleted from the DB is completely insignificant compared to the number of operations needed for the FK removal/re-addition.
Using an in-memory SQLite database is not an option, as the code under test uses MSSQL specific operations.
You could wrap everything in a transaction and at the end just roll back everything. That's how I do it. It also allows running tests in parallel.
What about using SQL Server Compact? Create the database from the mapping files using NHibernate's schema create, and load the data for each test - if you are talking about a trivial amount of data.
Have a look at this blog post - Using SQL Server Compact Edition for Unit testing
Alternatively you could use Fluent Migrator to create the database schema and load the data for each test.
Why are you even using a DB in your tests? Surely you should be mocking the persistence mechanism? Unless you're actually trying to test that part of the functionality you're wasting time and resources actually inserting/updating/deleting data.
The fact that your tests rely on MS SQL specifics and returned data hints at the possibility that your architecture needs looking at.
I'm not meaning to sound rude here - I'm just surprised no one else has picked you up on this.
There are a couple of things that I've done in the past to help speed up database integration tests. First, I ended up with a SQL script that creates the entire database from scratch. This can be easily accomplished using a tool like Red Gate SQL Compare against a blank database.
Second I created a script that removed all of the database objects from an existing database.
Then I needed a script that populated the database with test data. Again, simple to create using Red-Gate tools. You don't need/want a ton of data here, just enough to cover your test cases.
With those items in place, I created one test class with all of my read-only operations in it. In the init of that class, I cleared a local SQL Server Express instance, ran the create script, and then ran the populate script. This ensured the database was initialized correctly for all of the read-only tests.
For tests that actually manipulate the database, we just did the same routine as above, except that we did it on test init as opposed to class init.
Obviously the more database manipulation tests you have, the longer it will take to run all of your tests. If it becomes unruly, you should look at categorizing your tests and only running what is necessary locally and running the full suite on a continuous integration server.

How do you read the Oracle transaction log

Instead of placing triggers on tables everywhere in an Oracle database, is there a Java API that I can use to read transactions off the Oracle transaction log?
My purpose is to be able to detect transactions going into a proprietary (vendor) database and react accordingly. We can't modify the database, so as not to void our maintenance contract.
Please help!
There is LogMiner, which is SQL-based (so you could access it through JDBC).
http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/logminer.htm#sthref1875
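A rough, untested sketch of driving LogMiner from JDBC (assumes Oracle 10g-era options; the connected user needs EXECUTE on DBMS_LOGMNR and SELECT on V$LOGMNR_CONTENTS, the Oracle JDBC driver is on the classpath, and VENDOR_SCHEMA plus the connection details are placeholders):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LogMinerTail {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "miner", "secret")) {

            // Start a LogMiner session over the last hour, using the online catalog as the
            // dictionary. CONTINUOUS_MINE picks the redo/archive logs automatically
            // (available in 10g; deprecated in later releases).
            try (CallableStatement start = con.prepareCall(
                    "BEGIN DBMS_LOGMNR.START_LOGMNR(" +
                    " STARTTIME => SYSDATE - 1/24," +
                    " ENDTIME => SYSDATE," +
                    " OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG" +
                    " + DBMS_LOGMNR.CONTINUOUS_MINE); END;")) {
                start.execute();
            }

            // Reconstructed SQL for changes made to the vendor schema.
            try (Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT scn, operation, table_name, sql_redo FROM v$logmnr_contents " +
                     "WHERE seg_owner = 'VENDOR_SCHEMA' " +
                     "AND operation IN ('INSERT','UPDATE','DELETE')")) {
                while (rs.next()) {
                    System.out.println(rs.getLong("scn") + " " + rs.getString("operation")
                            + " " + rs.getString("table_name") + ": " + rs.getString("sql_redo"));
                }
            }

            // End the LogMiner session.
            try (CallableStatement end = con.prepareCall(
                    "BEGIN DBMS_LOGMNR.END_LOGMNR; END;")) {
                end.execute();
            }
        }
    }
}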
Or you can look at Oracle Streams, which reads the logs and generates 'logical change messages' into a queue from the log contents.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14229/strms_over.htm#i1006309
If you are running on *nix, there is a Perl module that you could use to tail the file and then break down the lines yourself.

Resources