Spring MongoDB annotation for a 2dsphere index on a geospatial field in Java?

@JsonSerialize
@Document(collection = "fence")
@CompoundIndexes({
    @CompoundIndex(name = "loc_groupId_idx",
        def = "{ 'loc': '2dsphere', 'groups.groupId': 1 }",
        unique = false) })
public class GeofenceMongoVO {

    public GeofenceMongoVO() {}

    @Id
    private String fenceId;
    @Field
    private Long customerId;
    @Field
    private String fenceName;
    @Field
    private Byte type;
This is how I tried to ensure a compound index on a geospatial field and a field of a child document (groupId), but unfortunately it is not working. Is there a way to ensure a 2dsphere index from Java code via annotations?

As of Spring Data MongoDB 1.10.10.RELEASE, you can annotate any field, whether at the document root or in a subdocument, with:
@GeoSpatialIndexed(type = GeoSpatialIndexType.GEO_2DSPHERE)
private GeoJsonPoint myGeometry;
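Applied to the entity from the question, a minimal sketch might look like the following (assuming the geometry is kept in a loc field of type GeoJsonPoint; Spring Data creates the index from the annotation at startup):

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.geo.GeoJsonPoint;
import org.springframework.data.mongodb.core.index.GeoSpatialIndexType;
import org.springframework.data.mongodb.core.index.GeoSpatialIndexed;
import org.springframework.data.mongodb.core.mapping.Document;

@Document(collection = "fence")
public class GeofenceMongoVO {

    @Id
    private String fenceId;

    // creates a 2dsphere index on 'loc' without any ensureIndex() call
    @GeoSpatialIndexed(type = GeoSpatialIndexType.GEO_2DSPHERE)
    private GeoJsonPoint loc;
}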

I'm not sure if it can be done with annotations yet, but I found a blog post here where they do it with an ensureIndex. Something like this:
@Autowired
MongoTemplate template;

public void setupIndex() {
    template.indexOps(Location.class).ensureIndex(new GeospatialIndex("position"));
}
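Note that GeospatialIndex defaults to a legacy 2d index; if my reading of the API is right, it can be switched to 2dsphere with typed(). A sketch, reusing the Location/position names from the example above:

@Autowired
MongoTemplate template;

public void setup2dsphereIndex() {
    // typed(...) switches the index from the legacy 2d default to 2dsphere
    template.indexOps(Location.class)
            .ensureIndex(new GeospatialIndex("position").typed(GeoSpatialIndexType.GEO_2DSPHERE));
}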

Related

Is there any annotation for creating indexes in Cosmos document?

In JPA we can create indexes for an entity using annotations like:
@Table(indexes = @Index(columnList = "firstName"))
Do we have a similar way of creating indexes for the Cosmos DB?
With Spring Data Cosmos, you can define a custom indexing policy using the @CosmosIndexingPolicy annotation on your container.
Example:
@Container(containerName = "users")
@CosmosIndexingPolicy(
    includePaths = {
        "/name/?"
    },
    excludePaths = {
        "/*"
    }
)
public class UserDocument {

    @Id
    @GeneratedValue
    @PartitionKey
    private String id;

    private String name;
}
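With this policy, /* excludes all paths by default and /name/? re-includes just the scalar value of the name property, so only name is indexed.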
See the Javadoc for more details: https://azuresdkdocs.blob.core.windows.net/$web/java/azure-spring-data-cosmos-core/3.0.0-beta.1/index.html?com/azure/spring/data/cosmos/core/mapping/CosmosIndexingPolicy.html

Spring Boot + Webflux + Reactive MongoDB - get document by property Id

I'd like to find all Offer documents by Offer.ProductProperties.brand:
#Document(collection = "offers")
public class Offer {
#Id
private String id;
#NotNull
#DBRef
private ProductProperties properties;
ProductProperties:
#Document(collection = "product_properties")
public class ProductProperties {
#Id
private String id;
#NotNull
#NotEmpty
private String brand;
Service:
Flux<ProductProperties> all = productPropertiesRepository.findAllByBrand(brand);
List<String> productPropIds = all.toStream()
.map(ProductProperties::getId)
.collect(Collectors.toList());
Flux<Offer> byProperties = offerRepository.findAllByProperties_Id(productPropIds);
But unfortunately byProperties is empty. Why?
My repository:
public interface OfferRepository extends ReactiveMongoRepository<Offer, String> {
Flux<Offer> findAllByProperties_Id(List<String> productPropertiesIds);
}
How to find all Offers by ProductProperties.brand?
Thanks!
After reading some documentation I found out that you cannot query through a @DBRef. Hence the message:
Invalid path reference properties.brand! Associations can only be
pointed to directly or via their id property
If you remove @DBRef from the field, you should be able to query with findAllByProperties_BrandAndProperties_Capacity.
Otherwise the only way is what you are already doing, i.e. fetch the ids and query by id.
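For instance, with the annotation removed (so ProductProperties is embedded in the offers collection rather than referenced), a derived query like this should work; a sketch:

public interface OfferRepository extends ReactiveMongoRepository<Offer, String> {

    // works only when 'properties' is embedded, not a @DBRef association
    Flux<Offer> findAllByProperties_Brand(String brand);
}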
As I said in the comment, the reason it is not working is that the return type of findAllByProperties_Id is a Flux. So unless you execute a terminal operation, you won't have any result. Try:
byProperties.collectList().block()
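Alternatively, the whole pipeline can stay non-blocking; a sketch using the repositories from the question:

// collect the property ids reactively, then switch back to a Flux of offers
Flux<Offer> byBrand = productPropertiesRepository.findAllByBrand(brand)
        .map(ProductProperties::getId)
        .collectList()
        .flatMapMany(ids -> offerRepository.findAllByProperties_Id(ids));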

Is there an elegant way to specify entity fields to be ignored by Spring Data Elasticsearch's ObjectMapper while leaving them serialized for REST?

In other words, the common Jackson markup is not enough when the same entity is serialized both as the REST response for the Angular frontend and for passing the object to Elasticsearch via the Jest client. Say I have an image in the entity as a byte array; I'd like it to be stored in the DB and passed to the frontend, but I don't want it indexed by Elasticsearch, to reduce costs or quotas.
Now I have to use Jackson's @JsonView to mark up the fields used by Spring Data Elasticsearch's ObjectMapper:
@Entity
@Table(name = "good")
@org.springframework.data.elasticsearch.annotations.Document(indexName = "good", shards = 1, replicas = 0, refreshInterval = "-1")
public class Good implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "sequenceGenerator")
    @SequenceGenerator(name = "sequenceGenerator")
    @org.springframework.data.elasticsearch.annotations.Field(type = FieldType.Keyword)
    @JsonView(Views.Elasticsearch.class)
    private Long id;

    @NotNull
    @Column(name = "short_name", nullable = false)
    @JsonView(Views.Elasticsearch.class)
    @Field(store = true, index = true, type = FieldType.Text)
    private String shortName;

    @NotNull
    @Column(name = "description", nullable = false)
    @JsonView(Views.Elasticsearch.class)
    @Field(store = true, index = true, type = FieldType.Text)
    private String description;

    @Lob
    @Column(name = "image", nullable = false)
    @JsonView(Views.Rest.class)
    private byte[] image;

    @Column(name = "image_content_type", nullable = false)
    @JsonView(Views.Rest.class)
    private String imageContentType;

    @NotNull
    @Column(name = "price", nullable = false)
    @JsonView(Views.Elasticsearch.class)
    @Field(store = true, index = true, type = FieldType.Integer)
    private Integer price;
    ...
I have a class for the views:
public class Views {
    public static class Rest {}
    public static class Elasticsearch {}
}
And the ObjectMapper is set up in the corresponding bean:
@Override
public String mapToString(Object object) throws IOException {
    log.trace("Object to convert to JSON : {}", object);
    log.trace("Converting to json for elasticsearch >>> {}", objectMapper.writer().withView(Views.Elasticsearch.class).writeValueAsString(object));
    //log.trace("Converting to json for elasticsearch >>> {}", objectMapper.writeValueAsString(object));
    return objectMapper.writerWithView(Views.Elasticsearch.class).writeValueAsString(object);
    //return objectMapper.writeValueAsString(object);
}
So I have to mark up every field except the ignored ones with @JsonView(Views.Elasticsearch.class), and this is error prone. A field still needs @Field as well if I want to pass parameters such as store or the value type. When a field has @JsonView(Views.Elasticsearch.class) but no @Field, the index property is created on the fly, which makes the field searchable, but not in the desired way.
The latter is the reason why, if I keep @Field and simply leave it off the fields I don't want indexed in Elasticsearch, the initial index creation indeed ignores them, but later requests still pass the undesired fields, because the entity is serialized exactly the same way as for REST. The index property is then created on the fly, spending resources on indexing the large binary object. So it looks like @Field is used for the initial index creation on startup, but is not consulted by Spring Data Elasticsearch's ObjectMapper.
So I'd like to make this ObjectMapper take only fields annotated with @Field into account, i.e. serialize only the fields marked with @Field and use no @JsonView stuff. How can I configure it?
These are known problems when using the Jackson ObjectMapper in Spring Data Elasticsearch (which also is the default), and this is one of the reasons why Spring Data Elasticsearch as of version 3.2 (currently in RC2, with 3.2.0.GA due in mid-September) offers a different mapper, the ElasticsearchEntityMapper.
This mapper still has to be set up explicitly; the reference documentation of 3.2.0.RC2 shows how to do this. With this mapper the Jackson annotations do not influence the data stored in and read from Elasticsearch, and you can use the org.springframework.data.annotation.Transient annotation on a field to keep it out of Elasticsearch.
The @Field annotation is used to set up the initial Elasticsearch mapping; properties without this annotation are mapped automatically by Elasticsearch when they are inserted.
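A configuration sketch along the lines of the 3.2 reference documentation (exact method names may differ between milestones):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.convert.support.DefaultConversionService;
import org.springframework.data.elasticsearch.config.ElasticsearchConfigurationSupport;
import org.springframework.data.elasticsearch.core.ElasticsearchEntityMapper;
import org.springframework.data.elasticsearch.core.EntityMapper;

@Configuration
public class ElasticsearchConfig extends ElasticsearchConfigurationSupport {

    // replaces the Jackson-based default, so Jackson annotations no longer
    // influence what is written to and read from Elasticsearch
    @Bean
    @Override
    public EntityMapper entityMapper() {
        ElasticsearchEntityMapper entityMapper = new ElasticsearchEntityMapper(
                elasticsearchMappingContext(), new DefaultConversionService());
        entityMapper.setConversions(elasticsearchCustomConversions());
        return entityMapper;
    }
}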

Spring-Mongo: mapping Mongo document fields to a BasicDBObject / Map of BasicDBObject in an entity

I have an entity ProjectCycle mapped to the MongoDB collection ProjectCycle. I am trying to retrieve 2 fields, _id and Status. I am able to retrieve both like the following:
#Document(collection="ProjectCycle")
public class ProjectCycle {
#Id
private String id;
#Field("Status")
private String status;
//getters and setters
}
Application.java
Query query = new Query();
query.fields().include("Status");
Criteria criteria = new Criteria();
criteria.and("_id").is("1000");
query.addCriteria(criteria);
Iterable<ProjectCycle> objectList = mongoOperations.find(query, ProjectCycle.class);
for(ProjectCycle obj : objectList) {
System.out.println("_id "+obj.getId());
System.out.println("status "+obj.getStatus());
}
Output
_id 1000
status Approved
But the problem is that when I use an entity with the field private DBObject basicDbObject; instead of private String status;, I get null instead of Approved.
I have tried the following:
public class ProjectCycle {

    @Id
    private String id;

    private DBObject basicDbObject;

    //getter & setter
}
What I am trying to achieve is this: the collection ProjectCycle is very large, and creating a POJO corresponding to it is quite difficult. Also, I am only reading data from MongoDB, so creating the entire POJO is time-wasting and tedious.
How can I map arbitrary fields from a Mongo collection to an entity?
Would it be possible to map the fields returned from the query into a Map<String, BasicDBObject> objectMap? I am using Spring Data MongoDB for this.
Version details
Spring 4.0.7.RELEASE
spring-data-mongodb 1.7.2.RELEASE
Try mapping your query result like below. Note that the keys are the raw field names as stored in Mongo, so _id and Status:
Iterable<BasicDBObject> objectList = mongoOperations.find(query, BasicDBObject.class, collectionname);
for (BasicDBObject obj : objectList) {
    System.out.println("_id " + obj.get("_id"));
    System.out.println("status " + obj.get("Status"));
}
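If you would rather hold the result in a map, as asked above, BasicDBObject can expose one directly; a small sketch:

// toMap() exposes the document as a plain Map; note the capital S in "Status"
@SuppressWarnings("unchecked")
Map<String, Object> asMap = obj.toMap();
System.out.println("status " + asMap.get("Status"));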

Spring Data Elasticsearch: update an entity using an alias

I'm currently fighting with the spring-data-elasticsearch API. I need it to work on an alias with several indexes pointing to it.
Each index stores the same types, but the indexes are just day-to-day storage (the first index holds Monday's results, the second Tuesday's results, and so on).
Some of the ElasticsearchRepository methods don't work because of the alias. I have managed to do a search (the findOne() equivalent), but I am not able to update an entity.
I don't know how to achieve that; I looked at the documentation and samples, but I'm stuck.
My repository
public interface EsUserRepository extends ElasticsearchRepository<User, String> {

    @Query("{\"bool\" : {\"must\" : {\"term\" : {\"id_str\" : \"?0\"}}}}")
    User findByIdStr(String idStr);
}
My Entity
@Document(indexName = "osintlab", type = "users")
public class User {

    // Elasticsearch internal id
    @Id
    private String id;

    // Just a test to get the real object index (_index field), in order to save it
    @Field(index = FieldIndex.analyzed, type = FieldType.String)
    private String indexName;

    // Real id, saved under the "id_str" field
    @Field(type = FieldType.String)
    private String id_str;

    @Field(type = FieldType.String)
    private List<String> tag_user;
}
What I tested
final IndexQuery indexQuery = new IndexQuery();
indexQuery.setId(user.getId());
indexQuery.setObject(user);
esTemplate.index(indexQuery);

userRepository.index(user);
userRepository.save(user);
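One thing that may explain the update failures: Elasticsearch rejects writes through an alias that points at more than one index, so the update has to target the concrete index, which is why storing the _index value on the entity is useful. A sketch with the template's UpdateQuery API, assuming getters for the fields shown and the pre-4.0 UpdateQueryBuilder:

UpdateRequest updateRequest = new UpdateRequest();
// partial document update; field name taken from the entity above
updateRequest.doc(Collections.singletonMap("tag_user", user.getTag_user()));

UpdateQuery updateQuery = new UpdateQueryBuilder()
        .withId(user.getId())
        .withIndexName(user.getIndexName()) // the concrete daily index, not the alias
        .withType("users")
        .withUpdateRequest(updateRequest)
        .build();

esTemplate.update(updateQuery);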
