Using Elasticsearch java config client in Spring Batch - elasticsearch

I am trying to write a custom Elasticsearch writer that indexes data as part of a Spring Batch implementation.
I found the code below as the Java config for Elasticsearch.
Can anyone who has used this share where this configuration is supposed to be called?
@Configuration
@EnableElasticsearchRepositories(basePackages = "org/springframework/data/elasticsearch/repositories")
static class Config {

    @Value("${esearch.port}") int port;
    @Value("${esearch.host}") String hostname;

    @Bean
    public ElasticsearchOperations elasticsearchTemplate() {
        return new ElasticsearchTemplate(client());
    }

    @Bean
    public Client client() {
        TransportClient client = new TransportClient();
        TransportAddress address = new InetSocketTransportAddress(hostname, port);
        client.addTransportAddress(address);
        return client;
    }
}

The code you listed is essentially the implementation detail of a Transport Client pointing at an Elasticsearch server instance, i.e. it defines your persistence layer using Spring Data.
This configuration is consumed by your Elasticsearch repositories, i.e. the repositories you define by extending Spring Data's ElasticsearchRepository.
You need to edit @EnableElasticsearchRepositories in the code you listed so that it points to the package where you keep your repository definitions; no other call is needed.
When you write/index data to Elasticsearch, you work with the ElasticsearchRepository interface: you define your own repositories, and those repositories use the beans declared in your configuration.
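For illustration, here is a minimal sketch of what such a repository and a custom Spring Batch writer built on it could look like (the Article document, ArticleRepository, and ArticleItemWriter are hypothetical names, not part of the question's code):

// Hypothetical document class; adjust the index name and fields to your data.
@Document(indexName = "articles")
public class Article {
    @Id
    private String id;
    private String title;
    // getters and setters omitted
}

// Spring Data generates the implementation at startup; this interface must live
// in the package referenced by @EnableElasticsearchRepositories.
public interface ArticleRepository extends ElasticsearchRepository<Article, String> {
}

// A custom Spring Batch writer that indexes each chunk through the repository.
public class ArticleItemWriter implements ItemWriter<Article> {

    private final ArticleRepository repository;

    public ArticleItemWriter(ArticleRepository repository) {
        this.repository = repository;
    }

    @Override
    public void write(List<? extends Article> items) {
        // saveAll on newer Spring Data; older versions use save(Iterable).
        // (Spring Batch 5 changes the parameter type to Chunk<? extends Article>.)
        repository.saveAll(items);
    }
}

With this in place, the writer is just another bean wired into your batch step; the Config class itself is picked up by Spring's component scanning and is never called directly.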
Hope it helps !!

Related

Unable to configure head collection and snapshot collection name in Javers framework in spring boot

I am using Spring Boot (v2.6.6), Java 11, and javers-spring-boot-starter-mongo (6.9.1). I tried changing the names of the collections that Javers creates by default in MongoDB, as described in the Javers docs.
This is what my configuration code looks like.
@Configuration
@ComponentScan(basePackages = "org.javers.spring")
@EnableMongoRepositories("org.javers.spring")
@EnableAspectJAutoProxy
public class JaversSpringMongoApplicationConfig {

    @Autowired
    MongoClient mongoClient;

    @Value("${db.name}")
    String dbName;

    @Bean
    public void mongoConfigJavers() {
        MongoRepository mongoRepository = new MongoRepository(getMongoDb(dbName),
                mongoRepositoryConfiguration()
                        .withSnapshotCollectionName("jv_custom_snapshots")
                        .withHeadCollectionName("jv_custom_head_id")
                        .build());
    }

    private MongoDatabase getMongoDb(String dbName) {
        return mongoClient.getDatabase(dbName);
    }
}
On running the code, the collection names are not changing. I have tried dropping the previous collections and running the code again, but the collection names still come out as jv_snapshots and jv_head_id instead of jv_custom_snapshots and jv_custom_head_id.
What else do I need to do, or how can I find out where I am going wrong?
Fixed it by changing the bean from mongoConfigJavers to a Javers bean. Basically, the Javers framework auto-configures this bean, and it can be overridden by defining a new bean with the same name. Reference
New code looks something like this:
@Bean
public Javers javers() {
    MongoRepository mongoRepository = new MongoRepository(getMongoDb(dbName),
            mongoRepositoryConfiguration()
                    .withSnapshotCollectionName("jv_custom_snapshots")
                    .withHeadCollectionName("jv_custom_head_id")
                    .build());
    return JaversBuilder.javers()
            .registerJaversRepository(mongoRepository)
            .build();
}
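Once this bean is registered, the starter backs off and the application uses your instance, so its snapshots land in the custom collections. A hypothetical sanity check (Person and personRepository are illustrative names):

// javers is the bean defined above, injected wherever it is needed.
// With the starter, commits also happen automatically via the
// @JaversSpringDataAuditable aspect; this explicit commit is just a check.
Person person = personRepository.save(new Person("bob", "Bob"));
javers.commit("author", person); // snapshot should appear in jv_custom_snapshots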
For reference, here is the bean configuration from the Javers starter itself; note the @ConditionalOnMissingBean, which is what lets your own bean take precedence. Further details
@Bean(name = "JaversFromStarter")
@ConditionalOnMissingBean
public Javers javers() {
    logger.info("Starting javers-spring-boot-starter-mongo ...");
    MongoDatabase mongoDatabase = initJaversMongoDatabase();
    MongoRepository javersRepository = createMongoRepository(mongoDatabase);
    JaversBuilder javersBuilder = TransactionalMongoJaversBuilder.javers()
            .registerJaversRepository(javersRepository)
            .withTxManager(mongoTransactionManager.orElse(null))
            .withProperties(javersMongoProperties)
            .withObjectAccessHook(javersMongoProperties.createObjectAccessHookInstance());
    plugins.forEach(plugin -> plugin.beforeAssemble(javersBuilder));
    return javersBuilder.build();
}
Just for information: with the release of 6.10.0, these collection names are also configurable via Spring Boot configuration files (application.yml).
From the 6.10.0 changelog, released on 2023-02-15 (1254):
Added the possibility to disable the schema management in Mongo Repository via application code.
Usage:
MongoRepository mongoRepository = new MongoRepository(getMongoDb(),
        mongoRepositoryConfiguration()
                .withSnapshotCollectionName("jv_custom_snapshots_")
                .withHeadCollectionName("jv_custom_head_id_")
                .withSchemaManagementEnabled(false)
                .build());
See also MongoE2EWithSchemaEnabledTest.groovy
Made the Mongo repository configuration parameters (snapshotCollectionName, headCollectionName, schemaManagementEnabled) configurable through Spring Boot configuration files (application.yml).
Usage:
javers:
  snapshotCollectionName: "jv_custom_snapshots"
  headCollectionName: "jv_custom_head_id"
  schemaManagementEnabled: false

Spring data with multiple modules not working

I'm trying to set up a project with two data sources, one is MongoDB and the other is Postgres. I have repositories for each data source in different packages and I annotated my main class as follows:
@Import({MongoDBConfiguration.class, PostgresDBConfiguration.class})
@SpringBootApplication(exclude = {
        MongoRepositoriesAutoConfiguration.class,
        JpaRepositoriesAutoConfiguration.class
})
public class TemporaryRunner implements CommandLineRunner {
    ...
}
MongoDBConfiguration:
@Configuration
@EnableMongoRepositories(basePackages = {
        "com.example.datastore.mongo",
        "com.atlassian.connect.spring"})
public class MongoDBConfiguration {
    ...
}
PostgresDBConfiguration:
@Configuration
@EnableJpaRepositories(basePackages = {
        "com.example.datastore.postgres"
})
public class PostgresDBConfiguration {
    ...
}
And even though I specified the base packages as described in the documentation, I still get these messages in the console:
13:10:44.238 [main] [] INFO o.s.d.r.c.RepositoryConfigurationDelegate - Multiple Spring Data modules found, entering strict repository configuration mode!
13:10:44.266 [main] [] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.atlassian.connect.spring.AtlassianHostRepository.
I managed to solve this issue for all my own repositories by using MongoRepository and JpaRepository, but AtlassianHostRepository comes from an external lib and is a regular CrudRepository (which totally makes sense, because the consumer of the lib can decide what type of DB they would like to use). Anyway, it looks like the basePackages I specified are completely ignored and not used in any way; even though I specified the com.atlassian.connect.spring package only in @EnableMongoRepositories, Spring Data somehow can't figure out which data module should be used.
Am I doing something wrong? Is there any other way I could tell Spring Data to use Mongo for AtlassianHostRepository without changing the AtlassianHostRepository.class itself?
The only working solution I found was to let Spring Data ignore AtlassianHostRepository (because it couldn't figure out which data source to use), then create a separate configuration for it and simply build the repository by hand:
@Configuration
@Import({MongoDBConfiguration.class})
public class AtlassianHostRepositoryConfiguration {

    private final MongoTemplate mongoTemplate;

    @Autowired
    public AtlassianHostRepositoryConfiguration(final MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Bean
    public AtlassianHostRepository atlassianHostRepository() {
        RepositoryFactorySupport factory = new MongoRepositoryFactory(mongoTemplate);
        return factory.getRepository(AtlassianHostRepository.class);
    }
}
This solution works fine for a small or limited number of repositories coming from a library. It would be rather cumbersome to create all the repositories by hand if there were more of them, but after reading the spring-data source code I see no way to make it work with basePackages as stated in the documentation (I may be wrong, though).
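If more external repositories did need the same treatment, one way to keep the boilerplate down would be to share a single factory bean; a sketch under that assumption (the additional repository is hypothetical):

@Configuration
@Import({MongoDBConfiguration.class})
public class ExternalMongoRepositoriesConfiguration {

    // One shared factory built from the Mongo template.
    @Bean
    public MongoRepositoryFactory mongoRepositoryFactory(MongoTemplate mongoTemplate) {
        return new MongoRepositoryFactory(mongoTemplate);
    }

    @Bean
    public AtlassianHostRepository atlassianHostRepository(MongoRepositoryFactory factory) {
        return factory.getRepository(AtlassianHostRepository.class);
    }

    // ...one @Bean method per additional external repository interface, e.g.:
    // @Bean
    // public SomeOtherRepository someOtherRepository(MongoRepositoryFactory factory) {
    //     return factory.getRepository(SomeOtherRepository.class);
    // }
}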

Implementation of DynamoDB for Spring Boot

I am trying to implement a backend DynamoDB for my Spring Boot application, but AWS recently updated their SDKs for DynamoDB, so almost all of the tutorials available on the internet, such as http://www.baeldung.com/spring-data-dynamodb, aren't directly relevant.
I've read through Amazon's SDK documentation regarding the DynamoDB class. Specifically, the way the object is instantiated and the endpoints/regions are set has been altered. In the past, constructing the client and setting endpoints looked like this:
@Bean
public AmazonDynamoDB amazonDynamoDB() {
    AmazonDynamoDB amazonDynamoDB
            = new AmazonDynamoDBClient(amazonAWSCredentials());
    if (!StringUtils.isEmpty(amazonDynamoDBEndpoint)) {
        amazonDynamoDB.setEndpoint(amazonDynamoDBEndpoint);
    }
    return amazonDynamoDB;
}

@Bean
public AWSCredentials amazonAWSCredentials() {
    return new BasicAWSCredentials(
            amazonAWSAccessKey, amazonAWSSecretKey);
}
However, the setEndpoint() method is now deprecated, and the AWS documentation states that we should construct the DynamoDB object through a builder:
AmazonDynamoDBClient() Deprecated. Use AmazonDynamoDBClientBuilder.defaultClient()
This other StackOverflow post recommends using this strategy to instantiate the database connection object:
DynamoDB dynamoDB = new DynamoDB(AmazonDynamoDBClientBuilder.standard()
        .withEndpointConfiguration(new EndpointConfiguration("http://localhost:8000", "us-east-1"))
        .build());
Table table = dynamoDB.getTable("Movies");
But IntelliJ gives me an error that DynamoDB is abstract and cannot be instantiated, and I cannot find any documentation on the proper class to extend.
In other words, I've scoured tutorials, SO, and the AWS documentation, and haven't found what I believe is the correct way to create my client. Can someone provide an implementation that works? I'm specifically trying to set up a client with a local DynamoDB (endpoint at localhost, port 8000).
I think I can take a stab at answering my own question. Using the developer guide for DynamoDBMapper, you can implement a DynamoDBMapper object that takes in your client and performs data services for you, like loading, querying, deleting, and saving (essentially CRUD). Here's the documentation I found helpful.
I created my own class called DynamoDBMapperClient with this code:
public class DynamoDBMapperClient {

    // amazonDynamoDBEndpoint and amazonAWSRegion come from a properties file
    private AmazonDynamoDB amazonDynamoDB = AmazonDynamoDBClientBuilder.standard()
            .withEndpointConfiguration(
                    new EndpointConfiguration(amazonDynamoDBEndpoint, amazonAWSRegion))
            .build();

    private AWSCredentials awsCredentials = new AWSCredentials() {
        @Override
        public String getAWSAccessKeyId() {
            return null;
        }

        @Override
        public String getAWSSecretKey() {
            return null;
        }
    };

    private DynamoDBMapper mapper = new DynamoDBMapper(amazonDynamoDB);

    public DynamoDBMapper getMapper() {
        return mapper;
    }
}
Basically, it takes in endpoint and region configurations from a properties file, then instantiates a new mapper that is accessed with a getter.
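A hypothetical usage example of the mapper (the Movie class and table are illustrative; save, load, and delete are the standard DynamoDBMapper operations):

// Hypothetical entity mapped to the "Movies" table.
@DynamoDBTable(tableName = "Movies")
public class Movie {

    private String id;

    @DynamoDBHashKey(attributeName = "id")
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
}

// Basic CRUD through the mapper:
DynamoDBMapper mapper = new DynamoDBMapperClient().getMapper();

Movie movie = new Movie();
movie.setId("m-1");
mapper.save(movie);                               // create or update the item
Movie loaded = mapper.load(Movie.class, "m-1");   // read by hash key
mapper.delete(loaded);                            // delete the item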
I know this may not be the complete answer, so I'm leaving this unanswered, but at least it's a start and you guys can tell me what I'm doing wrong!

Spring Data Redis - #Transactional support on Repository

We're using spring-boot-starter-parent 1.4.1 together with spring-boot-starter-redis and spring-boot-starter-data-redis. We use Redis for (a) message passing to an external app and (b) storing some information in a repository. Our Redis config looks like this:
@Configuration
@EnableRedisRepositories
open class RedisConfig {

    @Bean // for message passing
    @Profile("test")
    open fun testRedisChannelProvider(): RedisParserChannelProvider {
        return RedisParserChannelProvider("test_parser:parse.job", "test_parser:parse.joblist")
    }

    @Bean // for message passing
    @Profile("!test")
    open fun productionRedisChannelProvider(): RedisParserChannelProvider {
        return RedisParserChannelProvider("parser:parse.job", "parser:parse.joblist")
    }

    @Bean // for message passing
    open fun parseJobTemplate(connectionFactory: RedisConnectionFactory): RedisTemplate<String, ParseJob> {
        val template = RedisTemplate<String, ParseJob>()
        template.connectionFactory = connectionFactory
        template.valueSerializer = Jackson2JsonRedisSerializer<ParseJob>(ParseJob::class.java)
        return template
    }

    //@Bean // for message passing
    //open fun parseJobListTemplate ...

    // no template for repository
}
With this config, message passing is working nicely, as is writing to/reading from the repository. Now I am trying to get @Transactional working for communication with the repository, but I have not succeeded so far. I already followed the example config in the docs and manually enabled transaction support on the template:
@Bean
open fun redisTemplate(): RedisTemplate<*, *> {
    val template = RedisTemplate<ByteArray, ByteArray>()
    template.setEnableTransactionSupport(true)
    return template
}
...but this is apparently not the way to go. Currently, everything written to the repository (in particular during tests) stays there.
@Transactional use of Redis repositories is not possible, and I doubt it will work at all.
The reason behind this is how Spring Data Redis repository support works: RedisKeyValueAdapter relies on the results of write and read operations that are issued while persisting an object.
Redis transactions behave more like deferred batches, so it's not possible to wrap Redis repository support inside a transaction; that would require a different approach and would impose several limitations.
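To see the deferred-batch behavior concretely, here is a sketch in Java against the plain RedisTemplate API (not the repository support), following the transaction example from the Spring Data Redis docs:

// With transaction support enabled, commands between multi() and exec() are
// queued on the server; reads inside the transaction return null immediately,
// and the real results only arrive from exec().
redisTemplate.setEnableTransactionSupport(true);
List<Object> results = redisTemplate.execute(new SessionCallback<List<Object>>() {
    public List<Object> execute(RedisOperations operations) {
        operations.multi();
        operations.opsForValue().set("key", "value");
        Object queued = operations.opsForValue().get("key"); // null: only queued so far
        return operations.exec(); // all queued commands run here; results returned as a list
    }
});

This is why a repository layer that needs to read back what it just wrote cannot simply run inside MULTI/EXEC.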

How do I set up a MongoDB message store in an aggregator using annotations

I am trying to add an aggregator to my code, and I am facing a couple of problems:
1. How do I set up a message store using annotations only?
2. Is there any design documentation of how the aggregator works, ideally a picture explaining it?
@MessageEndpoint
public class Aggregator {

    @Aggregator(inputChannel = "abcCH", outputChannel = "reply", sendPartialResultsOnExpiry = "true")
    public APayload aggregatingMethod(List<APayload> items) {
        return items.get(0);
    }

    @ReleaseStrategy
    public boolean canRelease(List<Message<?>> messages) {
        return messages.size() > 2;
    }

    @CorrelationStrategy
    public String correlateBy(Message<AbcPayload> message) {
        return (String) message.getHeaders().get(RECEIVED_MESSAGE_KEY);
    }
}
In the Reference Manual we have a note:
Annotation configuration (@Aggregator and others) for the Aggregator component covers only simple use cases, where most default options are sufficient. If you need more control over those options using annotation configuration, consider using a @Bean definition for the AggregatingMessageHandler and mark its @Bean method with @ServiceActivator:
And a bit below:
Starting with version 4.2, the AggregatorFactoryBean is available to simplify Java configuration for the AggregatingMessageHandler.
So you should actually configure an AggregatorFactoryBean as a @Bean with @ServiceActivator(inputChannel = "abcCH", outputChannel = "reply"), as sketched below.
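A sketch of that approach with a MongoDB-backed message store, answering the original question (this assumes spring-integration-mongodb is on the classpath and reuses the Aggregator POJO above; treat the exact setter names as assumptions that may vary slightly between versions):

@Bean
public MessageGroupStore mongoDbMessageStore(MongoDbFactory mongoDbFactory) {
    // Persists in-flight message groups in MongoDB instead of in memory.
    return new MongoDbMessageStore(mongoDbFactory);
}

@Bean
@ServiceActivator(inputChannel = "abcCH")
public AggregatorFactoryBean aggregator(Aggregator aggregatorBean,
                                        MessageGroupStore mongoDbMessageStore) {
    AggregatorFactoryBean factoryBean = new AggregatorFactoryBean();
    factoryBean.setProcessorBean(aggregatorBean);     // the POJO with the aggregating method
    factoryBean.setMethodName("aggregatingMethod");
    factoryBean.setMessageStore(mongoDbMessageStore); // the MongoDB message store
    factoryBean.setReleaseStrategy(new MethodInvokingReleaseStrategy(aggregatorBean, "canRelease"));
    factoryBean.setCorrelationStrategy(new MethodInvokingCorrelationStrategy(aggregatorBean, "correlateBy"));
    factoryBean.setSendPartialResultOnExpiry(true);
    factoryBean.setOutputChannelName("reply");
    return factoryBean;
}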
Also, consider using the Spring Integration Java DSL to simplify your life with Java configuration.
