Spring Data MongoDB: Dynamic field name converter

How do I set the MongoDB Document field name dynamically (without using #Field)?
@Document
public class Account {
    private String username;
}
For example, field names should be capitalized. Result:
{"USERNAME": "hello"}
And I want this dynamic converter to work with any document, so I'm looking for a solution that doesn't rely on generics.

This is a somewhat unusual requirement. You can make use of Mongo lifecycle events; see the docs on lifecycle event listeners.
@Component
public class MongoListener extends AbstractMongoEventListener<Account> {
    @Override
    public void onBeforeSave(BeforeSaveEvent<Account> event) {
        DBObject dbObject = event.getDBObject();
        String username = (String) dbObject.get("username"); // read the value
        dbObject.put("USERNAME", username);                  // re-add it under the new key
        dbObject.removeField("username");                    // drop the old key
        // To handle arbitrary documents, you need to walk every field in
        // dbObject recursively, remove it, and re-add it under the
        // modified name.
    }
}
This is a bit clunky, but I believe there is no clean way to do this.
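The recursive walk hinted at in the comment above could look roughly like this. It is a sketch over plain `java.util.Map` (Mongo's `DBObject`/`Document` types expose a similar map-like API); the class name `FieldRenamer` and the upper-casing rule are my own illustration:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Locale;
import java.util.Map;

public class FieldRenamer {

    // Returns a copy of the document with every key upper-cased,
    // descending into nested documents and lists of documents.
    public static Map<String, Object> rename(Map<String, Object> doc) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : doc.entrySet()) {
            out.put(e.getKey().toUpperCase(Locale.ROOT), renameValue(e.getValue()));
        }
        return out;
    }

    @SuppressWarnings("unchecked")
    private static Object renameValue(Object value) {
        if (value instanceof Map) {
            return rename((Map<String, Object>) value);
        }
        if (value instanceof List) {
            List<Object> renamed = new ArrayList<>();
            for (Object item : (List<Object>) value) {
                renamed.add(renameValue(item));
            }
            return renamed;
        }
        return value; // scalar values pass through unchanged
    }
}
```

In the listener you would then replace the document's content with the renamed copy instead of renaming one hard-coded field.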

Related

Spring Data Mongo projection: ignore and override the values on save

Let me explain my problem with Spring Data Mongo. I have the following interface, with a custom query that uses a projection to ignore the index field (this example is only for illustration; in real life I ignore a bunch of fields).
public interface MyDomainRepo extends MongoRepository<MyDomain, String> {

    @Query(fields = "{ index: 0 }")
    MyDomain findByCode(String code);
}
In my MongoDB instance, MyDomain has the following data: MyDomain(code="mycode", info=null, index=19). When I use findByCode from MyDomainRepo, I get MyDomain(code="mycode", info=null, index=null). So far so good; this is the expected behaviour. The problem happens when I decide to save the findByCode return.
For instance, in the following example, I take the findByCode return, set the info property to "myinfo", and get the object below.
MyDomain(code="mycode", info="myinfo", index=null)
So I used save from MyDomainRepo. The index was ignored as expected by the projection, but when I saved the object back, with or without an update, Spring Data Mongo overrode the index property with null, and consequently my record in the MongoDB instance was overridden too. The following is my MongoDB JSON:
{
"_id": "5f061f9011b7cb497d4d2708",
"info": "myinfo",
"_class": "io.springmongo.models.MyDomain"
}
Is there a way to tell Spring Data Mongo to simply ignore null fields when saving?
Save is a replace operation, and you won't be able to tell it to patch only some fields; it replaces the document with whatever you send.
Your option is to use the extension mechanism provided by Spring Data repositories to define custom repository methods:
public interface MyDomainRepositoryCustom {
    void updateNonNull(MyDomain myDomain);
}

public class MyDomainRepositoryImpl implements MyDomainRepositoryCustom {

    private final MongoTemplate mongoTemplate;

    @Autowired
    public MyDomainRepositoryImpl(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Override
    public void updateNonNull(MyDomain myDomain) {
        // Populate the fields you want to patch
        Update update = Update.update("key1", "value1")
                .set("key2", "value2");
        // You can also use Update.fromDocument(Document object, String... exclude)
        // to build the update, but then you need a MongoConverter to convert
        // your domain object to a Document first.
        // Build a query that matches the id (a getId() accessor on MyDomain
        // is assumed here)
        Query queryToMatchId = Query.query(Criteria.where("_id").is(myDomain.getId()));
        mongoTemplate.updateFirst(queryToMatchId, update, MyDomain.class);
    }
}
public interface MyDomainRepository extends MongoRepository<MyDomain, String>,
        MyDomainRepositoryCustom {
}

Using annotations in spring boot for putting data in correct format

I have a field in my entity that holds a phone number. According to the conventions of the project, I need to save it in E.164 format in the DB. At the moment I use the @PrePersist and @PreUpdate annotations to change the phone number to the specified format. This approach is fine for one or two entities, but it becomes very error-prone when you have to repeat it over and over.
I was thinking it would be awesome if I could put the code in an annotation, and the annotation would read the field and change its value just before persistence, something like what @LastModifiedDate does. I searched the web for the code behind that annotation, but I didn't understand how it is managed.
How can I write an annotation that reads the value of a field and changes it before persistence? And how can I do the same before specific operations like delete (I want to set some params before deleting the object too)?
Take a look at EntityListeners.
You can create a listener that checks your custom annotation and triggers the appropriate methods.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
public @interface TheCustomAnnotation {
}

@Entity
@EntityListeners(TheListener.class)
public class TheEntity {

    @TheCustomAnnotation
    private String phoneNumber;
}

public class TheListener {

    @PrePersist
    public void prePersist(Object target) {
        for (Field field : target.getClass().getDeclaredFields()) {
            Annotation[] annotations = field.getDeclaredAnnotations();
            // Iterate over the annotations and check whether yours is present.
        }
    }
}
This is just an example.
@Pattern is a pretty powerful annotation that would be a good fit for validation if you are experienced with regular expressions.
For example,
@Pattern(regexp = "^[0-9]{3}-[0-9]{3}-[0-9]{4}$")
private String phoneNumber;
The downside is that this only works for Strings though.
If you are interested more in conversion than validation, you may want to look into @JsonDeserialize if you are using Jackson.
For example:
@JsonDeserialize(using = PhoneNumberDeserializer.class)
private String phoneNumber;

public class PhoneNumberDeserializer extends JsonDeserializer<String> {

    private final Pattern phonePattern =
            Pattern.compile("^[0-9]{3}(.+)[0-9]{3}(.+)[0-9]{4}$");

    @Override
    public String deserialize(JsonParser jsonParser,
                              DeserializationContext deserializationContext)
            throws IOException, JsonProcessingException {
        String phone = jsonParser.getText();
        Matcher matcher = phonePattern.matcher(phone);
        if (matcher.matches()) {
            // Strip the separator groups, keeping only the digits.
            return phone.replaceAll("[^0-9]", "");
        }
        return phone;
    }
}
This will work for any type, not just strings.
Sorry it's a little convoluted, I was having fun reteaching myself.

Dynamic Index with SpringData ElasticSearch

How can I parameterize a SpringData ElasticSearch index at runtime?
For example, the data model:
@Document(indexName = "myIndex")
public class Asset {

    @Id
    public String id;
    // ...
}
and the repository:
public interface AssetRepository extends ElasticsearchCrudRepository<Asset, String> {
    Asset getAssetById(String assetId);
}
I know I can replace myIndex with a parameter, but that parameter will be resolved during instantiation / boot. We have the same Asset structure for multiple clients / tenants, which have their own index. What I need is something like this:
public interface AssetRepository extends ElasticsearchCrudRepository<Asset, String> {
    Asset getAssetByIdFromIndex(String assetId, String index);
}
or this
repoInstance.forIndex("myOtherIndex").getAssetById("123");
I know this does not work out of the box, but is there any way to programmatically 'hack' it?
Even though the bean is initialized at boot time, you can still achieve this with the Spring Expression Language:
@Bean
Name name() {
    return new Name();
}

@Document(indexName = "#{name.name()}")
public class Asset {}
You can change the bean's property to switch the index you save to or search in:
assetRepo.save(new Asset(...));
name.setName("newName");
assetRepo.save(new Asset(...));
Note that this bean should not be shared across multiple threads, as that may mess up your index.
Here is a working example.
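One way to sidestep the thread-sharing caveat is to back the name with a ThreadLocal, so each request thread resolves its own index. The class below is my own illustration, not part of Spring Data:

```java
// Hypothetical index-name holder: each thread sees its own value, so
// concurrent requests can point the #{name.name()} SpEL expression at
// different tenant indices without interfering with each other.
public class ThreadLocalName {

    private final ThreadLocal<String> current =
            ThreadLocal.withInitial(() -> "myIndex");

    public String name() {           // matches the SpEL call in @Document
        return current.get();
    }

    public void setName(String newName) {
        current.set(newName);
    }
}
```

Registered as the `name` bean, calls like `name.setName("tenantA")` before a repository operation then affect only the current thread.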
org.springframework.data.elasticsearch.repository.ElasticsearchRepository has a method
FacetedPage<T> search(SearchQuery searchQuery);
where SearchQuery can take multiple indices to be used for searching.
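As a sketch (assuming an older Spring Data Elasticsearch version where NativeSearchQueryBuilder exposes withIndices; the index names here are made up, so check your version's API):

```java
// Build a query that targets several indices explicitly, then run it
// through the repository's search(SearchQuery) method.
SearchQuery searchQuery = new NativeSearchQueryBuilder()
        .withQuery(QueryBuilders.termQuery("id", "123"))
        .withIndices("tenant-a-index", "tenant-b-index")
        .build();
FacetedPage<Asset> page = assetRepository.search(searchQuery);
```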
I hope this answers your question.

Spring Data Neo4j: Converter of object to string works, but object to long is not executed

I have a really strange issue with converting domain objects to types Neo4j can natively store as property values. As a test case I use Joda's DateTime. An object of that type can be converted to a String or Long quite easily.
The conversion from DateTime to String works flawlessly with this code:
public class DateTimeToStringConverter implements Converter<DateTime, String> {

    @Override
    public String convert(DateTime source) {
        return source.toDateTimeISO().toString();
    }
}
The property shows up in the node:
Node[1] {
'__type__' = '...',
'entityEditedAt' = '2012-12-28T12:32:50.308+01:00',
'entityCreatedAt' = '2012-12-28T12:32:50.297+01:00',
...
}
However if I like to save the DateTime as Long (useful to sort by time in Cypher), it does not work at all. Here is my converter:
public class DateTimeToLongConverter implements Converter<DateTime, Long> {

    @Override
    public Long convert(DateTime source) {
        return source.toDateTimeISO().getMillis();
    }
}
The property is not saved on the node. Thus it is missing completely. No exception is thrown. It seems like the conversion code is not called at all.
The converters are hooked into Spring Data using code-based configuration:
@Bean
public ConversionServiceFactoryBean conversionService() {
    Set<Converter<?, ?>> converters = Sets.newHashSet();
    // These work!
    converters.add(new DateTimeToStringConverter());
    converters.add(new StringToDateTimeConverter());
    // These don't :-(
    //converters.add(new DateTimeToLongConverter());
    //converters.add(new LongToDateTimeConverter());
    ConversionServiceFactoryBean bean = new ConversionServiceFactoryBean();
    bean.setConverters(converters);
    return bean;
}
Any clues? I'm quite lost here, as it should work in my opinion...
Edit
I found following text in the Spring Data Neo4j documentation:
All fields convertible to a String using the Spring conversion services will be stored as a string.
Does this mean, that only conversions to string are supported? This seems rather limiting.
Tell SDN that you want to store your Joda DateTime property as a Long with:
@NodeEntity
public class MyEntity {
    ...

    @GraphProperty(propertyType = Long.class)
    private DateTime timestamp;
    ...
}
Then your registered DateTimeToLongConverter will kick in.
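The question's configuration already references a LongToDateTimeConverter for the read side; a minimal version (a sketch against Joda-Time and Spring's Converter interface, so it won't compile standalone) might look like:

```java
// Inverse of DateTimeToLongConverter: reads the stored epoch-millis
// value back into a Joda DateTime when the node is loaded.
public class LongToDateTimeConverter implements Converter<Long, DateTime> {

    @Override
    public DateTime convert(Long source) {
        return new DateTime(source.longValue());
    }
}
```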

How do you handle deserializing empty string into an Enum?

I am trying to submit a form from Ext JS 4 to a Spring 3 Controller using JSON. I am using Jackson 1.9.8 for the serialization/deserialization using Spring's built-in Jackson JSON support.
I have a status field that is initially null in the Domain object for a new record. When the form is submitted it generates the following json (scaled down to a few fields)
{"id":0,"name":"someName","status":""}
After submitting, the following is seen in the server log:
"nested exception is org.codehaus.jackson.map.JsonMappingException: Can not construct instance of com.blah.domain.StatusEnum from String value '': value not one of the declared Enum instance names"
So it appears that Jackson expects either a valid enum value or no value at all, but not an empty string. How do I fix this, whether in Ext JS, Jackson, or Spring?
I tried to create my own ObjectMapper such as
public class MyObjectMapper extends ObjectMapper {
    public MyObjectMapper() {
        configure(DeserializationConfig.Feature.ACCEPT_EMPTY_STRING_AS_NULL_OBJECT, true);
    }
}
and send this as a property to MappingJacksonJsonView, but this didn't work. I also tried sending it to MappingJacksonHttpMessageConverter, but that didn't work either. Side question: which one should I be passing my own ObjectMapper to?
Suggestions?
The other thing you could do is create a specialized deserializer (extending org.codehaus.jackson.map.JsonDeserializer) for your particular enum, one that has a default value for anything that doesn't match. What I've done is create an abstract deserializer for enums that takes the class it deserializes, which speeds this process along whenever I run into the issue.
public abstract class EnumDeserializer<T extends Enum<T>> extends JsonDeserializer<T> {

    private Class<T> enumClass;

    public EnumDeserializer(final Class<T> iEnumClass) {
        super();
        enumClass = iEnumClass;
    }

    @Override
    public T deserialize(final JsonParser jp,
                         final DeserializationContext ctxt)
            throws IOException, JsonProcessingException {
        final String value = jp.getText();
        for (final T enumValue : enumClass.getEnumConstants()) {
            if (enumValue.name().equals(value)) {
                return enumValue;
            }
        }
        return null;
    }
}
That's the generic class: it takes an enum class, iterates over the enum's values, and checks whether the current token matches any of their names. If one matches, it is returned; otherwise it returns null.
Then, if you have an enum MyEnum, you'd make a subclass of EnumDeserializer like this:
public class MyEnumDeserializer extends EnumDeserializer<MyEnum> {
    public MyEnumDeserializer() {
        super(MyEnum.class);
    }
}
Then wherever you declare MyEnum:
@JsonDeserialize(using = MyEnumDeserializer.class)
public enum MyEnum {
    ...
}
I'm not familiar with Spring, but just in case, it may be easier to handle that on the client side:
Ext.define('My.form.Field', {
    extend: 'Ext.form.field.Text',
    getSubmitValue: function() {
        var me = this,
            value;
        value = me.getRawValue();
        if ( value === '' ) {
            return ...;
        }
    }
});
You can also disallow submitting empty fields by setting their allowBlank property to false.
Ended up adding defaults in the Ext JS model so there is always a value. I was hoping I wouldn't have to do this, but it's not that big of a deal.
I have the same issue. I am reading a JSON stream that contains some empty strings. I am not in control of the JSON stream, because it comes from a foreign service, and I always get the same error message. I tried this:
mapper.getDeserializationConfig().with(DeserializationConfig.Feature.ACCEPT_EMPTY_STRING_AS_NULL_OBJECT);
But it had no effect. Looks like a bug.
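A plausible explanation (my reading of the Jackson 1.x API, not confirmed in this thread): the `with(...)` methods on the 1.9 config objects return a modified copy rather than mutating the mapper, so calling it on `getDeserializationConfig()` and discarding the result changes nothing. Configuring the mapper directly should take effect:

```java
// Jackson 1.x: configure the mapper itself. Calling with(...) on the
// object returned by getDeserializationConfig() produces a new config
// instance that is otherwise thrown away.
mapper.configure(DeserializationConfig.Feature.ACCEPT_EMPTY_STRING_AS_NULL_OBJECT, true);
```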
