How to avoid empty strings in the DB? - Spring

I have a REST API written in Spring that inserts data into a PostgreSQL database. I would like to avoid empty strings in the database -- and let empty fields be null instead.
Is there anything meaningful I can do on the database side?
For instance, I tried the following; while the syntax works from the console, it won't work with named parameters when used through JdbcTemplate in Spring.
insert
...
nullif(trim(:value), '')
Or perhaps I should just clean the data programmatically?
Edit: To clarify, I am asking about the concepts, not about the specific way of achieving the goal provided as an example, or the resulting error message.
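For reference, a minimal sketch of the "clean the data programmatically" option, assuming a NamedParameterJdbcTemplate; the table, column, and helper names are hypothetical:

import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;

public class UserDao {

    private final NamedParameterJdbcTemplate jdbc;

    public UserDao(NamedParameterJdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    // Hypothetical insert: blank input is converted to null in Java,
    // so the database never sees an empty string.
    public void insertNickname(String nickname) {
        jdbc.update("insert into users (nickname) values (:nickname)",  // table/column assumed
                new MapSqlParameterSource("nickname", blankToNull(nickname)));
    }

    // Returns null for null, empty, or whitespace-only input; trimmed value otherwise.
    private static String blankToNull(String value) {
        if (value == null) return null;
        String trimmed = value.trim();
        return trimmed.isEmpty() ? null : trimmed;
    }
}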

Related

How to fetch the data from database using spring boot without mapping

I have a database with many tables of data. I want to fetch the data from any one of those tables by entering a query from the front-end application. I'm not doing any manipulation of the data, just retrieving it from the database.
Also, mapping the data would require writing so many entity or POJO classes, so I don't want to map the data to any object. How can I achieve this?
In this case, assuming the mapping of tables is not relevant, you don't need to use JPA/Hibernate at all.
You can use the old, battle-tested JdbcTemplate to execute a query of your choice (passed in from the client), serialize the result to a JSONObject, and return it as the response from your controller.
The client side will be responsible for rendering the result.
You might also query the database metadata to obtain information about column names, types, etc., so that the client side gets this information too and can show the results in a more convenient / "advanced" way.
Beware of the security implications, though. Basically it means that the client will be able to delete all the records from the database with a simple query, and you won't be able to prevent it :)
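A minimal sketch of this approach, assuming Spring Boot with a JdbcTemplate bean; the endpoint path is made up, and the security caveat above applies in full:

import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class QueryController {

    private final JdbcTemplate jdbcTemplate;

    public QueryController(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // WARNING: executes whatever SQL the client sends -- see the caveat above.
    @PostMapping("/query")
    public List<Map<String, Object>> run(@RequestBody String sql) {
        // Each row comes back as a column-name -> value map, which Spring
        // serializes to JSON without any entity classes.
        return jdbcTemplate.queryForList(sql);
    }
}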

REST client that can consume any REST API run-time and persist the data creating new tables dynamically

I'm looking for a solution with Spring/Camel to consume multiple REST services at runtime, create tables to store the data from the REST APIs, and compare the data dynamically. I don't know the schema of the JSON APIs in advance, so I cannot generate Java client classes or create JPA persistent entity classes at runtime.
You'll need to think through this differently. I'd forget about Java POJO classes that you don't have and can't create, since the class structure isn't known in advance. So anything with POJO-to-Entity binding would be pretty useless.
One solution is to simply parse the XML or JSON body manually with an event-based parser (like SAX for XML) and build an SQL CREATE string as you go through the document. Your field and table names would correspond to the tags in the document. Without access to an XSD or other structure description, no metadata is available for field lengths or types, so you may have to make everything a really long VARCHAR. Perhaps an XML database or another kind of database would suit your problem domain better. In any case, you could include such a thing right in your Camel route as a Processor that processes the body and creates the necessary tables if they don't already exist. You could even alter a table's column lengths on the fly when you encounter a field value longer than what's currently defined.
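As a rough sketch of the event-based idea, here is how field names could be collected from a flat JSON body with Jackson's streaming parser to build a CREATE string; the table name and VARCHAR sizing are assumptions:

import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import java.io.InputStream;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative sketch only: derive a CREATE TABLE statement from a flat JSON
// object using Jackson's streaming (event-based) parser. Nested structures
// would need extra handling.
public class JsonDdlSketch {

    public static String deriveCreateTable(InputStream json, String tableName) throws Exception {
        Set<String> columns = new LinkedHashSet<>();
        try (JsonParser parser = new JsonFactory().createParser(json)) {
            JsonToken token;
            while ((token = parser.nextToken()) != null) {
                if (token == JsonToken.FIELD_NAME) {
                    columns.add(parser.getCurrentName());  // field names become column names
                }
            }
        }
        // No schema metadata is available, so make everything a really long VARCHAR.
        String cols = columns.stream()
                .map(c -> c + " VARCHAR(4000)")
                .collect(Collectors.joining(", "));
        return "CREATE TABLE " + tableName + " (" + cols + ")";
    }
}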

liquibase 'generateChangelog' generates wrong schema (bad data)

After running generateChangelog on an Oracle database, the changelog file has the wrong type (or, more precisely, simply bad values) for some fields, regardless of the driver used.
More specifically, some of the RAW columns are translated to STRING (which sounds okay), but values like "E01005C6842100020200000E10000000" are translated to "[B@433defed", which looks like the default toString() of a Java byte array. These are also the only data-related differences between the original database content and the backup.
When I try to restore the DB with update, these columns produce errors like "Unexpected error running Liquibase: *****: invalid hex number".
Is there any way to force Liquibase to save the problem columns as-is, or any other way to overcome this situation? Or is it a bug?
I think more information is needed to diagnose this. Ideally, if you suspect something may be a bug, you should provide three things:
what steps you took (this would include the versions of things being used, relevant configuration, commands issued, etc.)
what the actual results were
what the expected results were
Right now we have some idea of the first (you ran generateChangelog on Oracle, then tried to run update), but we are missing things like the structure of the Oracle database, the versions of Oracle and Liquibase, and the actual commands issued. We have some idea of the actual results (columns of type RAW in Oracle are converted to STRING in the changelog, and the data in those columns may also be converted to values different from what you expect) and some idea of the expected results (you expected the RAW data to be saved in the changelog and then to be able to re-deploy that change).
That being said, using Liquibase to back up and restore a database (especially one that has columns of type RAW/CLOB/BLOB) is probably not a good idea.
Liquibase is primarily aimed at helping manage changes to the structure of a database, not so much the data contained within it.

How to save spring integration message header into database

What would be the easiest and best solution for saving Spring Integration message header information into a database?
I know about the outbound adapter component; that's not the question.
The question is: how can I do it without specifying the header elements in the SQL one by one?
Is there any built-in solution for this problem that could adapt when new header elements are added to the message header?
To be honest, there is no such solution, because with an RDBMS we have a schema bottleneck.
Even if we could do something like this:
insert into headers values (:headers)
we would still have to add every new column to the target DB table.
That's why any NoSQL DB is better for this case. E.g. MongoDB can simply accept the whole Message and store it as a Document with all its headers.
Even if we build a Spring Integration scenario with JPA adapters, just to use <jpa:outbound-channel-adapter> where the message payload is an entity for our headers, we would need to add a new property to the entity to allow JPA to deal with each new header.
However, I think the new columns in the DB aren't an issue for you, and on the other hand MessageHeaders is a Map, so you can iterate over its entries and build an INSERT INTO ... statement at runtime for each message (see the sketch below).
Hope I haven't missed something...
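For illustration, a rough sketch of that runtime-built INSERT; the headers table and its one-column-per-header layout are assumptions, and real code would also need to sanitize the header names:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.messaging.Message;

// Rough sketch: build the INSERT from the MessageHeaders map at runtime.
// Assumes a "headers" table that already has a column per header name.
public class HeaderInserter {

    private final JdbcTemplate jdbcTemplate;

    public HeaderInserter(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public void insertHeaders(Message<?> message) {
        List<String> columns = new ArrayList<>();
        List<Object> values = new ArrayList<>();
        // MessageHeaders is a Map, so new headers are picked up automatically.
        for (Map.Entry<String, Object> entry : message.getHeaders().entrySet()) {
            columns.add(entry.getKey());
            values.add(entry.getValue());
        }
        String sql = "insert into headers (" + String.join(", ", columns)
                + ") values (" + String.join(", ", Collections.nCopies(columns.size(), "?")) + ")";
        jdbcTemplate.update(sql, values.toArray());
    }
}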

Converting a J2EE app from SQL to Oracle - suggestions for an efficient approach

We have a J2EE app built on Struts2 + Spring + iBatis; not all DAOs use iBatis... some code still uses the old JDBC approach of interacting with the database. All our DAOs call stored procedures; we do not have any inline SQL. Since Oracle stored procedures return cursors, we have to change our code drastically.
It is fairly easy for us to convert the current iBatis mappings (in SQL) to Oracle (we used a Groovy script to do this), and it is also easy to convert the Java code that called the old SQL mappings.
Our problem is converting the old DAOs that still use the JDBC approach. Since we will have to modify them anyway (because we are now using Oracle), we are thinking about converting them to iBatis mappings. Is this a good approach? It will be a huge effort on our side...
What do you think will be the best approach to tackle this huge effort?
Should we just get to work and start converting each method in every DAO?
Should we try to make a small script that looks at each method, parses out relevant information, and generates iBatis mappings from that?
For maintenance and separation purposes, should we have one iBatis mapping per DAO?
I apologize if the question is vague, but I am just looking for someone who has gone through this type of thing before and has some pointers or lessons learned.
The first thing you should do is cover your DAO layer with tests. That way you'll know if you break something during the conversion. If you are moving a stored procedure from one DBMS to Oracle, you should also write tests for it, using a framework like DbUnit.
You should have a TEST DB instance populated with sample data that doesn't change. You should be able to refresh this DB with the same set of sample data after you are done running your tests, so that the TEST DB is always in a known state. You will then have your input parameters paired with expected (correct) results; your tests read in these pairs, execute them against the test DB instance, and confirm that the expected result is returned. Assuming your tests mutate the DB, you'll want to refresh it between runs of your test suite, as in the sketch below.
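A minimal sketch of such a test using JUnit and Spring's JdbcTemplate; the connection details, table, and sample data are all made up:

import static org.junit.Assert.assertEquals;

import org.junit.Before;
import org.junit.Test;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

// Minimal sketch of a DAO regression test against the TEST DB instance.
public class UserDaoRegressionTest {

    private JdbcTemplate jdbc;

    @Before
    public void refreshSampleData() {
        DriverManagerDataSource ds = new DriverManagerDataSource(
                "jdbc:oracle:thin:@testhost:1521:TESTDB", "test", "test");  // assumed TEST DB
        jdbc = new JdbcTemplate(ds);
        // Reset to the known state before every test, since tests may mutate the DB.
        jdbc.update("delete from users");
        jdbc.update("insert into users (id, name) values (1, 'alice')");
    }

    @Test
    public void findNameByIdReturnsExpectedResult() {
        // An input parameter paired with its expected (correct) result; the pair
        // should hold both before and after the migration.
        String name = jdbc.queryForObject(
                "select name from users where id = ?", String.class, 1);
        assertEquals("alice", name);
    }
}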
Second, if you're already going in and changing some data access implementations for Oracle, why not use this as an opportunity to move some of that business logic out of the DB and into Java? There are many well-documented problems with maintaining large codebases in a DBMS.
Should we try to make a small script that looks at each method, parses out relevant information, and generates iBatis mappings from that?
I don't recommend this. The time you'd spend tweaking the script for each special case, plus hunting down all the bugs it would introduce, would be better spent doing the conversion by a thinking human.
For maintenance and separation purposes, should we have one iBatis mapping per DAO?
That's a fine idea. You can then combine them in your sqlMapConfig with
<sqlMap resource="sqlMaps/XXX.xml" />
This will keep your mappings more manageable. Just make sure to specify the namespace attribute in each sqlMap like:
<sqlMap namespace="User">
So that you can reuse mappings between the sqlMaps for instantiating object graphs (example: when loading a User and his Permissions, the User.xml sqlMap calls the Permission.xml mapping).
All our DAOs call stored procedures
I don't see what iBatis is buying you here.
It's also not clear what the migration is. Are you saying you've decided to move all the code into stored procedures, so there's no more inline SQL? If that's the case, I'd say don't use iBatis. If you're already using Spring, let it call into Oracle using its StoredProcedure object and map the cursors into objects.
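For example, a sketch of Spring's StoredProcedure wrapping an Oracle procedure that returns a REF CURSOR; the procedure name, cursor parameter, and User mapping are assumptions:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.sql.DataSource;
import oracle.jdbc.OracleTypes;
import org.springframework.jdbc.core.SqlOutParameter;
import org.springframework.jdbc.object.StoredProcedure;

// Sketch: map the REF CURSOR returned by an Oracle stored procedure to objects.
public class GetUsersProcedure extends StoredProcedure {

    public GetUsersProcedure(DataSource dataSource) {
        super(dataSource, "get_users");  // procedure name assumed
        // Each cursor row is converted by the RowMapper into a User.
        declareParameter(new SqlOutParameter("users_cur", OracleTypes.CURSOR,
                (rs, rowNum) -> new User(rs.getLong("id"), rs.getString("name"))));
        compile();
    }

    @SuppressWarnings("unchecked")
    public List<User> findUsers() {
        Map<String, Object> out = execute(new HashMap<>());
        return (List<User>) out.get("users_cur");
    }

    public static class User {
        public final long id;
        public final String name;
        public User(long id, String name) { this.id = id; this.name = name; }
    }
}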
The recommendation to create JUnit or, better yet, TestNG tests is spot on. Do that before changing anything.
