How to handle blob field in Spring Batch?

I need to read a BLOB column from the DB and put it in a file, and vice versa, with Spring Batch.
How can I do it?
I'm already using Spring Batch to read data from the DB and put it in a CSV file and vice versa, but now I also have to handle the BLOB datatype: when one is present, I need to create a single file with its data.

One way you could do it is by using a LobHandler. You can either get your blob as bytes with getBlobAsBytes and convert it to a String, or as a binary stream with getBlobAsBinaryStream:
import java.sql.ResultSet;
import java.sql.SQLException;
import org.springframework.jdbc.core.RowMapper;
import org.springframework.jdbc.support.lob.DefaultLobHandler;
import org.springframework.jdbc.support.lob.LobHandler;

public class YourRowMapper implements RowMapper<YourObjectType> {
    private final LobHandler lobHandler = new DefaultLobHandler();

    @Override
    public YourObjectType mapRow(ResultSet resultSet, int rowNum) throws SQLException {
        // Read the BLOB column as a byte array (use getBlobAsBinaryStream for large blobs)
        String value = new String(lobHandler.getBlobAsBytes(resultSet, "COLUMN_NAME"));
        return new YourObjectType(value); // build and return your mapped object here
    }
}
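Since the goal is to put the blob into a file, a stream-based variant avoids loading the whole blob into memory. A minimal sketch, assuming a helper class, column name, and target path of your choosing (all illustrative):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.springframework.jdbc.support.lob.DefaultLobHandler;
import org.springframework.jdbc.support.lob.LobHandler;

public class BlobToFile {
    private static final LobHandler LOB_HANDLER = new DefaultLobHandler();

    // Stream the BLOB column straight into a file instead of holding it in memory
    static void writeBlobToFile(ResultSet rs, String column, Path target) throws SQLException {
        try (InputStream in = LOB_HANDLER.getBlobAsBinaryStream(rs, column)) {
            Files.copy(in, target);
        } catch (IOException e) {
            throw new SQLException("Failed to write BLOB to " + target, e);
        }
    }
}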
Hope this helps.

Generally you can use a byte[] field to represent a BLOB. The details can depend on which DB is used.

Related

How to access Spring properties from an entity?

I have a Spring app that pushes data into an S3 bucket.
public class Ebook implements Serializable {

    @Column(name = "cover_path", unique = true, nullable = true)
    private String coverPath;

    private String coverDownloadUrl;

    @Value("${aws.cloudfront.region}")
    private String awsCloudFrontDns;

    @PostLoad
    public void init() {
        // I want to access the property here
        coverDownloadUrl = "https://" + awsCloudFrontDns + "/" + coverPath;
    }
}
When data is pushed, let's say my cover here, I get the key 1/test-folder/mycover.jpg, which is the important part of the future HTTP URL of the data.
When I read the data from the database, I enter the @PostLoad method and I want to construct the complete URL using the CloudFront value. This value changes frequently, so we don't want to store it permanently in the database.
How can I construct my full path just after reading the data from the database?
Is the only way to use a service that updates the data after the repository reads it? For readById that could be a good solution, but for reading lists or using other JPA methods it won't work, because I would have to create a dedicated service for the update each time.
It doesn't look good for an entity to depend on a Spring property.
How about an EntityListener?
@Component
public class EbookEntityListener {

    @Value("${aws.cloudfront.region}")
    private String awsCloudFrontDns;

    @PostLoad
    void postLoad(Ebook entity) {
        entity.updateDns(awsCloudFrontDns);
    }
}
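For the listener to fire, it also needs to be registered on the entity; a sketch is below. Note that @Value injection into a JPA listener only works if the listener is instantiated as a Spring bean; recent Spring Boot versions arrange this via Hibernate's SpringBeanContainer.

import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.EntityListeners;

@Entity
@EntityListeners(EbookEntityListener.class) // register the listener on the entity
public class Ebook implements Serializable {
    // ... fields as above ...
}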
I recommend trying this way :)

How to Read Records From Any Database Table and Export As TextFile Using Spring Batch

I am building a spring batch job that will be invoked through a webservice. The webservice will take a list of select and delete statement pairs. The records returned by the select statement will be saved as a CSV on the filesystem and then those same records will be deleted by executing the supplied delete statement.
I have seen a number of ColumnRowMapper examples but that requires me to create a POJO for each table entity. I am looking for a solution that will handle any column from any table. Any suggestions on approach?
**UPDATE**
Since writing this post, I've landed on the following solution.
@Bean
@StepScope
public JdbcCursorItemReader<Map<String, ?>> getRowsOfDataForExportFromTable() {
    JdbcCursorItemReader<Map<String, ? extends Object>> databaseReader = new JdbcCursorItemReader<>();
    databaseReader.setDataSource(jdbcTemplate.getDataSource());
    databaseReader.setSql("select * from SOME_TABLE where last_updated_date < DATE_SUB(NOW(), INTERVAL 10 DAY);");
    databaseReader.setRowMapper(new RowMapper<Map<String, ? extends Object>>() {
        @Override
        public Map<String, ? extends Object> mapRow(ResultSet resultSet, int rowNum) throws SQLException {
            Map<String, String> resultMap = new LinkedHashMap<>();
            int numOfColumns = resultSet.getMetaData().getColumnCount();
            for (int j = 1; j <= numOfColumns; j++) {
                String columnName = resultSet.getMetaData().getColumnName(j);
                String value = resultSet.getString(j);
                resultMap.put(columnName, value);
            }
            return resultMap;
        }
    });
    return databaseReader;
}
The above ItemReader uses a row mapper that builds a LinkedHashMap for each row, where the column name is the key and the column value is the value.
Did you try using a Map instead of a POJO? You can fill it dynamically in the reader and then create the CSV file from that Map.
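A minimal sketch of the writing side, assuming the Map-based reader above (the output path, bean name, and config class are illustrative):

import java.util.Map;
import java.util.stream.Collectors;
import org.springframework.batch.item.file.FlatFileItemWriter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.FileSystemResource;

@Configuration
public class ExportWriterConfig {

    @Bean
    public FlatFileItemWriter<Map<String, ?>> csvWriter() {
        FlatFileItemWriter<Map<String, ?>> writer = new FlatFileItemWriter<>();
        writer.setName("csvWriter"); // required for the writer's restart state
        writer.setResource(new FileSystemResource("/tmp/export.csv")); // illustrative path
        // The LinkedHashMap preserves column order, so joining the values yields a CSV row
        writer.setLineAggregator(row -> row.values().stream()
                .map(v -> v == null ? "" : v.toString())
                .collect(Collectors.joining(",")));
        return writer;
    }
}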

Spring Batch unifying multiple reader result

I would like to unify (merge) an item based on two different sources (a flat file and a DB).
How can I perform that using a Spring Batch ItemReader?
public class User {
    Long id;
    String firstName;
    String lastName;
    String subscriptionCode;
}
...
stepBuilderFactory.get("createUserStep1")
        .<User, User>chunk(1000)
        .reader(flatFileReader) // Read a line from the file and create a partial User (only id and subscriptionCode)
        // Here - how do I read the DB using the previously read User.id and extend the User with additional data?
        .processor(new SkipDuplicatedUserItemProcessor()) // processor skips duplicated users
        .writer(itemWriter) // Write somewhere
        .build();
This problem can be solved using a reader and a processor:
use a FlatFileItemReader to read your incomplete object and an ItemProcessor<YourObject, YourObject> to complete it with the missing information.
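A minimal sketch of such an enriching processor, assuming a JdbcTemplate, a users table with first_name/last_name columns, and setters on User (all illustrative):

import org.springframework.batch.item.ItemProcessor;
import org.springframework.jdbc.core.JdbcTemplate;

public class UserEnrichmentItemProcessor implements ItemProcessor<User, User> {

    private final JdbcTemplate jdbcTemplate;

    public UserEnrichmentItemProcessor(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public User process(User user) {
        // Complete the partial User (id, subscriptionCode) with columns from the DB
        return jdbcTemplate.queryForObject(
                "select first_name, last_name from users where id = ?",
                (rs, rowNum) -> {
                    user.setFirstName(rs.getString("first_name"));
                    user.setLastName(rs.getString("last_name"));
                    return user;
                },
                user.getId());
    }
}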

Spring data mongodb throws org.springframework.dao.DuplicateKeyException

I have an object that I want to save to both MySQL and MongoDB.
The object class is like:
Order:
public long id;
public String brokerID;//UUID
public String userID;//UUID
public String orderID;//UUID
public double price;
public long volume;
The long id field is for MySQL to auto_increment and return to me as an OUT parameter.
When I try to save two different Order objects to MongoDB with Spring Data, it gives me a DuplicateKeyException, as it assumes id is my key, and since it's not initialized, both objects have id = 0.
mongoOperation.insert(new Order(UUID.randomUUID().toString(), UUID.randomUUID().toString(), UUID.randomUUID().toString(), 500d, 500L));
mongoOperation.insert(new Order(UUID.randomUUID().toString(), UUID.randomUUID().toString(), UUID.randomUUID().toString(), 500d, 500L));
How can I save it to MongoDB? Should I rename id to something else?
OK, changing the id field to some other name, like mysqlId, resolves this issue.
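A sketch of the renamed field, assuming Spring Data MongoDB's mapping annotations (field names are illustrative); MongoDB then generates its own _id:

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

@Document
public class Order {
    @Id
    private String mongoId;  // MongoDB's generated _id lives here
    private long mysqlId;    // formerly "id": the MySQL auto_increment value
    private String brokerID; // UUID
    private String userID;   // UUID
    private String orderID;  // UUID
    private double price;
    private long volume;
}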

Hibernate annotations image storing

I'm using Struts2 and Hibernate, and I want to know how to store and retrieve images from the database using Hibernate annotations in the POJO class.
The best way to store images in the database is as a byte array: upload the image using the Struts2 file upload utility and then pass it on to Hibernate as byte[] image;
In your mapping you have to do something like:
@Lob
@Column(name = "IMAGE")
private byte[] image;
How to use annotations for this is very well described in the following thread:
proper hibernate annotation for byte[]
The answer is as follows:
private byte[] imageBefore;

@Type(type = "org.hibernate.type.BinaryType")
@Column(name = "IMAGE_BEFORE")
public byte[] getImageBefore() {
    return imageBefore;
}
Refer to this link if you're not using annotations and for a complete reference.
Or simply:
@Lob
private byte[] picture;
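Putting it together, a minimal entity sketch (class, table, and column names are illustrative):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Lob;
import javax.persistence.Table;

@Entity
@Table(name = "PHOTO")
public class Photo {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Lob
    @Column(name = "IMAGE")
    private byte[] image; // raw image bytes stored as a BLOB column

    public byte[] getImage() { return image; }
    public void setImage(byte[] image) { this.image = image; }
}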
