Spring Boot - MongoDB find by manually created query

In my application I build some Query objects programmatically and then simply want to find all entities that match them. Right now that requires creating a CustomXYRepository interface plus an implementation just to call MongoOperations's find method (example: this.mongoOperation.find(query, AppointmentDao.class);).
Again, my queries are constructed by the application itself, so I cannot use the "static" @Query annotation, and I would like to avoid a custom repository implementation for a single line of code.
Is there any way to declare a method like this in the repository interface?
public interface CalendarRepository extends MongoRepository<AppointmentDao, String> {
    ...
    /**
     * Find appointments by the given query.
     *
     * @param query the query to execute
     * @return the matching appointments
     */
    List<AppointmentDao> findByQuery(Query query);
    ...
}
Thank you so much, have a nice day!
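In the absence of such a derived method, the custom-fragment route the question mentions really does stay a one-liner. Here is a plain-Java sketch of that pattern; Query, AppointmentDao and MongoOperations are minimal stand-ins so the example is self-contained, while in a real project they come from Spring Data MongoDB, which picks up a *Impl class by naming convention:

```java
import java.util.List;

// Minimal stand-ins for the Spring Data MongoDB types used in the question:
class Query {
    final String criteria;
    Query(String criteria) { this.criteria = criteria; }
}

class AppointmentDao {
    final String id;
    AppointmentDao(String id) { this.id = id; }
}

interface MongoOperations {
    <T> List<T> find(Query query, Class<T> entityClass);
}

// The fragment interface the repository would additionally extend:
interface CalendarRepositoryCustom {
    List<AppointmentDao> findByQuery(Query query);
}

// The one-line implementation Spring Data merges into the repository
// when the impl class follows the <FragmentInterface>Impl naming rule:
class CalendarRepositoryCustomImpl implements CalendarRepositoryCustom {
    private final MongoOperations mongoOperations;

    CalendarRepositoryCustomImpl(MongoOperations mongoOperations) {
        this.mongoOperations = mongoOperations;
    }

    @Override
    public List<AppointmentDao> findByQuery(Query query) {
        return mongoOperations.find(query, AppointmentDao.class);
    }
}
```

With this arrangement, CalendarRepository would extend both MongoRepository and CalendarRepositoryCustom, and callers see findByQuery(Query) as if it were a regular repository method.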

How to create a Reactive Inbound Channel Adapter in Spring Integration Reactor

I'd like to understand how to create a reactive channel adapter for Spring Integration with Reactor core. From other forums I've read that this MongoDB reactive adapter can be a good example, but it contains lots of Mongo-specific domain classes.
I've read the Reactive part of the docs, and I see that MessageProducerSupport needs to be implemented, but from the code example it looks like one also has to implement a class that extends MessageProducerSpec and calls the first one. Can someone give an example of the most basic usage and explain what is actually required to create such a channel adapter? What I understand I should do is something like:
public IntegrationFlow buildPipe() {
    return IntegrationFlows.from(myMessageProducerSpec)
            .handle(reactiveMongoDbStoringMessageHandler, "handleMessage")
            .handle(writeToKafka)
            .get();
}
The MessageProducerSpec is for the Java DSL. It has nothing to do with the low-level logic of the channel adapter. If you have a MessageProducerSupport, then that one is good enough to use in the flow definition:
/**
 * Populate the provided {@link MessageProducerSupport} object to the {@link IntegrationFlowBuilder} chain.
 * The {@link org.springframework.integration.dsl.IntegrationFlow} {@code startMessageProducer}.
 * @param messageProducer the {@link MessageProducerSupport} to populate.
 * @return new {@link IntegrationFlowBuilder}.
 */
public static IntegrationFlowBuilder from(MessageProducerSupport messageProducer) {
See more in docs about arbitrary channel adapter usage in the Java DSL: https://docs.spring.io/spring-integration/docs/current/reference/html/dsl.html#java-dsl-protocol-adapters
But again: forget about the Java DSL API for now and implement the channel adapter first. Yes, a reactive MessageProducerSupport must call subscribeToPublisher() in its doStart() implementation. The rest of the logic around building the Flux from the source system is up to you and the library you are going to rely on.
There is also a ReactiveRedisStreamMessageProducer and ZeroMqMessageProducer, but I cannot say that their code is easier to digest than the mentioned MongoDbChangeStreamMessageProducer.
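The overall shape that doStart()/subscribeToPublisher() gives such a producer can be illustrated without any Spring or Reactor classes, using the JDK's Flow API as a stand-in for the Publisher. All names in this sketch are invented; it only demonstrates the "subscribe to the source publisher on start, forward items downstream" pattern:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// Toy message producer: subscribing happens in doStart(), like a reactive
// MessageProducerSupport; "downstream" is just a list here instead of a
// Spring Integration output channel.
class ToyMessageProducer {
    private final Flow.Publisher<String> source;
    final List<String> received = new CopyOnWriteArrayList<>();
    final CountDownLatch completed = new CountDownLatch(1);

    ToyMessageProducer(Flow.Publisher<String> source) {
        this.source = source;
    }

    // The analogue of doStart(): subscribe to the source publisher and
    // forward every emitted item downstream.
    void doStart() {
        source.subscribe(new Flow.Subscriber<String>() {
            @Override public void onSubscribe(Flow.Subscription s) { s.request(Long.MAX_VALUE); }
            @Override public void onNext(String item) { received.add(item); }
            @Override public void onError(Throwable t) { completed.countDown(); }
            @Override public void onComplete() { completed.countDown(); }
        });
    }
}
```

Building the actual Publisher (change stream, Redis stream, ZeroMQ socket, ...) is the part that is specific to the source system, exactly as the answer says.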

Create beans on application load

Before starting to read, that's a list of the links I've tried to read to solve this issue:
Spring : Create beans at runtime
How to add bean instance at runtime in spring WebApplicationContext?
https://www.logicbig.com/tutorials/spring-framework/spring-core/bean-definition.html
Spring - Programmatically generate a set of beans
How do I create beans programmatically in Spring Boot?
I'm using SpringBoot and RabbitMQ for my services and lately, I've read this article: https://programmerfriend.com/rabbit-mq-retry/
I would like to generalize this idea and create a Spring Boot library that receives the names of the queues to create via the application.properties file and does everything behind the scenes, so that the creation of the different queues and the bindings between them happen "automagically" whenever I integrate the library into a service.
The problem I'm facing is basically reproducing the behavior @Bean gives me, repeated N times (once for each name specified in my application.properties).
From what I could see online, it is possible to create beans when the app is loaded, but the only way to do it is by telling the context what type of class you want to generate and letting Spring handle it itself. As the class I want to generate does not have a default constructor (and is not controlled by me), I wondered whether there is a way to create an object myself and add that specific instance to the application context.
but the only way to do it is by telling the context what type of class you want to generate and let Spring handle it itself
No; since 5.0 you can now provide a Supplier for the bean.
/**
 * Register a bean from the given bean class, using the given supplier for
 * obtaining a new instance (typically declared as a lambda expression or
 * method reference), optionally customizing its bean definition metadata
 * (again typically declared as a lambda expression).
 * <p>This method can be overridden to adapt the registration mechanism for
 * all {@code registerBean} methods (since they all delegate to this one).
 * @param beanName the name of the bean (may be {@code null})
 * @param beanClass the class of the bean
 * @param supplier a callback for creating an instance of the bean (in case
 * of {@code null}, resolving a public constructor to be autowired instead)
 * @param customizers one or more callbacks for customizing the factory's
 * {@link BeanDefinition}, e.g. setting a lazy-init or primary flag
 * @since 5.0
 */
public <T> void registerBean(@Nullable String beanName, Class<T> beanClass,
        @Nullable Supplier<T> supplier, BeanDefinitionCustomizer... customizers) {
So
context.registerBean("fooBean", Foo.class, () -> someInstance);
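To see why the Supplier variant helps with classes that lack a default constructor, here is a toy, map-backed imitation of supplier-based registration (all names invented; Spring's GenericApplicationContext does the real work, plus scopes, lifecycle and dependency injection):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Toy registry: the container stores the supplier and invokes it on demand,
// so the bean class never needs a default constructor and a pre-built
// instance can be handed over directly.
class ToyBeanRegistry {
    private final Map<String, Supplier<?>> suppliers = new HashMap<>();

    <T> void registerBean(String name, Class<T> beanClass, Supplier<T> supplier) {
        suppliers.put(name, supplier);
    }

    Object getBean(String name) {
        return suppliers.get(name).get();
    }
}

// Stand-in for a third-party class without a default constructor:
class QueueHolder {
    final String queueName;
    QueueHolder(String queueName) { this.queueName = queueName; }
}
```

For the queue-library use case, you would loop over the names from application.properties and call registerBean once per name, each supplier capturing the queue name it should build.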

Spring Integration Flow: filter based on headers without SpEl expression

I have the following Spring integration flow:
@Bean
public IntegrationFlow checkoutEventFlow() {
    return IntegrationFlows.from(EventASink.INPUT)
            .filter("headers['type'] == 'TYPE_A'") //1
            .transform(Transformers.fromJson(EventA.class)) //2
            .<EventA, EventB> transform(eventA ->
                    new EventB(
                            eventA.getSomeField(),
                            eventA.getOtherField()))
            .handle(Http.outboundGateway(uri).httpMethod(HttpMethod.POST))
            .get();
}
1) I would like to filter a message based on its headers without using a SpEl expression (look at //1), is it possible?
2) Is there another mechanism for JSON conversion to POJO without //2? I like the way #StreamListener can be written in terms of POJO and conversion is done behind the scenes.
Thanks in advance.
Without SpEL in the filter() you can do a Java lambda instead:
.filter(Message.class, m -> "TYPE_A".equals(m.getHeaders().get("type")))
Spring Cloud Stream is an opinionated framework where JSON is the default content type for data traveling through the flow and into/from the target messaging system.
Spring Integration is a library for building integration applications, so it cannot be opinionated about any default content-type conversion. There is no out-of-the-box guessing that your incoming byte[] should be transformed into some POJO because of JSON. Although, to honor some of the same possibilities Spring Cloud Stream offers, there is a hook in the POJO method invoker that converts JSON into the expected POJO, it is applied only to custom POJO methods that are also marked with @ServiceActivator. So we cannot infer your expectation inside the .transform() lambda. You need to have a service with a method and use it in the:
/**
 * Populate a {@link ServiceActivatingHandler} for the
 * {@link org.springframework.integration.handler.MethodInvokingMessageProcessor}
 * to invoke the {@code method} for provided {@code bean} at runtime.
 * In addition accept options for the integration endpoint using {@link GenericEndpointSpec}.
 * @param service the service object to use.
 * @param methodName the method to invoke.
 * @return the current {@link IntegrationFlowDefinition}.
 */
public B handle(Object service, String methodName) {
This way it is going to work the same way you see in Spring Cloud Stream with @StreamListener.
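What handle(service, "methodName") boils down to can be sketched in plain Java with reflection (all names here are invented; Spring's MethodInvokingMessageProcessor additionally performs the payload conversion, e.g. JSON to POJO, which is elided in this sketch):

```java
import java.lang.reflect.Method;

// A service with an ordinary POJO method, the kind that
// handle(service, "process") would target:
class CheckoutService {
    public String process(String payload) {
        return "handled:" + payload;
    }
}

// Minimal method-invoking processor: resolve the method once by name,
// then invoke it for each message payload.
class ToyMethodInvoker {
    private final Object service;
    private final Method method;

    ToyMethodInvoker(Object service, String methodName, Class<?> paramType) {
        this.service = service;
        try {
            this.method = service.getClass().getMethod(methodName, paramType);
        } catch (NoSuchMethodException e) {
            throw new IllegalArgumentException(e);
        }
    }

    Object handle(Object payload) {
        try {
            return method.invoke(service, payload);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }
}
```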

Java Spring, JPA - like expression with wildcard

I am struggling with the creation of a JPQL statement using a LIKE expression in a Java Spring application. My aim is to implement a simple case-insensitive search function.
My code, which should return a list of companies containing a keyWord in their name, looks like this:
List<Company> companies = em.createQuery(
        "select cmp from JI_COMPANIES cmp where UPPER(cmp.name) LIKE :keyWord", Company.class)
    .setParameter("keyWord", "%" + keyWord.toUpperCase() + "%")
    .getResultList();
return companies;
However, this query only returns results when my keyWord exactly matches a company's name (the LIKE expression is not applied).
When testing the function above, I see this message from Hibernate in my console:
Hibernate: select ... /* all my parameters */ ... where company0_.CMPN_NAME=?
It seems that my query is translated to cmpn_name='name' instead of using a LIKE expression.
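For reference, the intended behavior is a case-insensitive contains match; the plain-Java equivalent of UPPER(cmp.name) LIKE '%KEYWORD%' with the wildcard-wrapped parameter is:

```java
// Plain-Java equivalent of the intended case-insensitive contains match
// (illustration of the semantics only, not of the JPQL fix):
class NameMatcher {
    static boolean matchesKeyword(String name, String keyWord) {
        return name.toUpperCase().contains(keyWord.toUpperCase());
    }
}
```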
You can look at Hibernate batch-processing best practices and stateless sessions.
It was my fault, due to not understanding how the Spring Data JPA library works.
I had created a custom implementation for my CompanyRepository (CompanyRepository extends JpaRepository, CompanyRepositoryCustom).
The code mentioned above was located in my CompanyRepositoryCustomImpl class, which implements the CompanyRepositoryCustom interface. However, I named the method "findCompaniesByName(String name)". Spring Data JPA automatically derives a query from that method name, so my implementation was never used.
Here is the link to the Spring Data JPA reference.

How to handle a large set of data using Spring Data Repositories?

I have a large table that I'd like to access via a Spring Data Repository.
Currently, I'm trying to extend the PagingAndSortingRepository interface, but it seems I can only define methods that return lists, e.g.:
public interface MyRepository extends PagingAndSortingRepository<MyEntity, Integer> {

    @Query(value = "SELECT * ...")
    List<MyEntity> myQuery(Pageable p);
}
On the other hand, the findAll() method that comes with PagingAndSortingRepository returns an Iterable (and I suppose that the data is not loaded into memory).
Is it possible to define custom queries that also return Iterable and/or don't load all the data into memory at once?
Are there any alternatives for handling large tables?
We have the classical consulting answer here: it depends. As the implementation of the method is store specific, we depend on the underlying store API. In case of JPA there's no chance to provide streaming access as ….getResultList() returns a List. Hence we also expose the List to the client as especially JPA developers might be used to working with lists. So for JPA the only option is using the pagination API.
For a store like Neo4j we support the streaming access as the repositories return Iterable on CRUD methods as well as on the execution of finder methods.
The implementation of findAll() simply loads the entire list of all entities into memory. Its Iterable return type doesn't imply that it implements some sort of database level cursor handling.
On the other hand your custom myQuery(Pageable) method will only load one page worth of entities, because the generated implementation honours its Pageable parameter. You can declare its return type either as Page or List. In the latter case you still receive the same (restricted) number of entities, but not the metadata that a Page would additionally carry.
So you basically did the right thing to avoid loading all entities into memory in your custom query.
Please review the related documentation here.
I think what you are looking for is Spring Data JPA Stream. It brings a significant performance boost to data fetching, particularly in databases with millions of records. In your case you have several options to consider:
Pull all data once in memory
Use pagination and read pages each time
Use something like Apache Spark
Streaming data using Spring Data JPA
In order to make Spring Data JPA Stream work, we need to modify MyRepository to return Stream<MyEntity>, like this:
public interface MyRepository extends PagingAndSortingRepository<MyEntity, Integer> {

    @QueryHints(value = {
            @QueryHint(name = HINT_CACHEABLE, value = "false"),
            @QueryHint(name = READ_ONLY, value = "true")
    })
    @Query(value = "SELECT * ...")
    Stream<MyEntity> myQuery();
}
In this example, we disable second-level caching and hint to Hibernate that the entities will be read-only. If your requirements are different, make sure to change those settings accordingly.
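One usage detail worth adding: a Stream-returning query holds underlying resources open, so it should be consumed inside the surrounding (typically read-only) transaction and closed afterwards; try-with-resources handles the closing. A stubbed sketch (the interface below stands in for the real Spring Data repository):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Stand-in for the Stream-returning Spring Data repository method:
interface StreamingRepository {
    Stream<String> myQuery(); // stands in for Stream<MyEntity>
}

class StreamingReader {
    // In a real service this method would be @Transactional so the JDBC
    // resources behind the stream stay open while it is read; the
    // try-with-resources block guarantees the stream is closed.
    static List<String> readAll(StreamingRepository repository) {
        try (Stream<String> results = repository.myQuery()) {
            return results.collect(Collectors.toList());
        }
    }
}
```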
