Spring Boot Actuator Health Indicator

We have been using Spring Boot for several projects now and are on the latest version, 1.2.3. We are incorporating Actuator. So far things are working well, except we are finding that the default /health indicator reports that the service is down, which is not true. The services in question are implemented with data sources and may also call other SOAP or REST services. What does the health endpoint look at to decide whether a service is down?

As @derFuerst said, the DataSourceHealthIndicator has a default query to check whether the DB is up or not.
If you want to use the proper vendor-specific query, you should define your own health indicator in your configuration class, like this for an Oracle data source:
@Autowired(required = false)
private DataSource dataSource;

@Bean
@Primary
public DataSourceHealthIndicator dataSourceHealthIndicator() {
    return new DataSourceHealthIndicator(dataSource, "SELECT 1 FROM DUAL");
}

The DataSourceHealthIndicator is used to check database availability. The default query is SELECT 1, but there are some product-specific queries, too.
You can also write your own HealthIndicator: either implement the interface or extend AbstractHealthIndicator.
To disable the default DB health check, put this line into your application properties: management.health.db.enabled=false.
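For example, a custom indicator could look like the sketch below. This only illustrates extending AbstractHealthIndicator; the bean name and the ping logic are placeholders, not something from the original question:

import org.springframework.boot.actuate.health.AbstractHealthIndicator;
import org.springframework.boot.actuate.health.Health;
import org.springframework.stereotype.Component;

@Component
public class ExternalServiceHealthIndicator extends AbstractHealthIndicator {

    @Override
    protected void doHealthCheck(Health.Builder builder) throws Exception {
        // Placeholder: replace with a real ping of the SOAP/REST dependency.
        boolean reachable = pingExternalService();
        if (reachable) {
            builder.up().withDetail("externalService", "reachable");
        } else {
            builder.down().withDetail("externalService", "unreachable");
        }
    }

    private boolean pingExternalService() {
        // Placeholder check; call the downstream service here.
        return true;
    }
}

Any bean of type HealthIndicator is picked up automatically and contributes its status to /health.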
Hope that helps

The above answer helped me start my research, but for me it was not enough:
@Bean
@Primary
public DataSourceHealthIndicator dataSourceHealthIndicator() {
    return new DataSourceHealthIndicator(dataSource, "SELECT 1 FROM DUAL");
}
Here is the configuration that made it work for me: define a HealthIndicator @Bean like the following and provide the required query:
@Bean
@Primary
public HealthIndicator dbHealthIndicator() {
    return new DataSourceHealthIndicator(dataSource, "SELECT 1 FROM DUMMY");
}
If no query is provided, SELECT 1 will be used, as @derFuerst said. Here is the default implementation of DataSourceHealthIndicator:
public DataSourceHealthIndicator(DataSource dataSource, String query) {
    super("DataSource health check failed");
    this.dataSource = dataSource;
    this.query = query;
    this.jdbcTemplate = dataSource != null ? new JdbcTemplate(dataSource) : null;
}
...
protected String getValidationQuery(String product) {
    String query = this.query;
    if (!StringUtils.hasText(query)) {
        DatabaseDriver specific = DatabaseDriver.fromProductName(product);
        query = specific.getValidationQuery();
    }
    if (!StringUtils.hasText(query)) {
        query = "SELECT 1";
    }
    return query;
}

Hi everyone, I'm a beginner at health checks with Actuator. I used the solution below and it worked:
@Autowired(required = false)
private DataSource dataSource;

@Bean
@Primary
public DataSourceHealthIndicator dataSourceHealthIndicator() {
    return new DataSourceHealthIndicator(dataSource, "SELECT 1 FROM DUAL");
}
But can anyone please let me know how the validation query works even though there isn't a table named DUAL? Also, as per my understanding, when we request /actuator/health, all implementations of HealthIndicator are called automatically and their health check methods are executed. Kindly correct me if I'm wrong.

Related

Implementing Axon snapshot with Spring Boot 2.3.3 and Axon 4.4.2

Can anyone suggest a tutorial/sample project for implementing snapshots in Axon 4.4.2 with Spring Boot 2.3.3?
I went through the documentation (https://docs.axoniq.io/reference-guide/axon-framework/tuning/event-snapshots#snapshotting) and did the following:
The AxonConfig class:
@Configuration
public class AxonConfig {

    @Bean
    public SnapshotTriggerDefinition app1SnapshotTrigger(Snapshotter snapshotter) {
        return new EventCountSnapshotTriggerDefinition(snapshotter, 10);
    }
}
The Aggregate
@Aggregate(snapshotTriggerDefinition = "app1SnapshotTrigger")
public class MyAggregate {

    @AggregateIdentifier
    private String id;
    private String name;

    @AggregateMember
    private List<Address> addresses = new ArrayList<>();

    private MyAggregate() {
    }

    @CommandHandler
    private MyAggregate(CreateNameCommand createNameCommand) {
        -----
    }

    @EventSourcingHandler
    private void on(NameCreatedEvent nameCreatedEvent) {
        ----
    }
}
Am I missing something? Will it create a snapshot at the threshold value of 10?
Thanks.
Unfortunately we have no sample demo ready to show for this case.
From your code snippet it looks like everything is in place. Maybe there is some other configuration that is taking over your annotation.
To give it a try, I applied your configuration to our https://github.com/AxonIQ/giftcard-demo/
A first note that can guide you is the following:
if you declare a Repository bean as we did in https://github.com/AxonIQ/giftcard-demo/blob/master/src/main/java/io/axoniq/demo/giftcard/command/GcCommandConfiguration.java#L17, this configuration will take precedence over the annotation placed on your aggregate. If you prefer the annotation, you can remove this Bean definition.
Here is the piece of code, instead, to have this configured as a Bean:
@Bean
public Repository<GiftCard> giftCardRepository(EventStore eventStore, SnapshotTriggerDefinition giftCardSnapshotTrigger) {
    return EventSourcingRepository.builder(GiftCard.class)
            .snapshotTriggerDefinition(giftCardSnapshotTrigger)
            .eventStore(eventStore)
            .build();
}

@Bean
public SpringAggregateSnapshotterFactoryBean snapshotter() {
    var springAggregateSnapshotterFactoryBean = new SpringAggregateSnapshotterFactoryBean();
    // Setting async executors
    springAggregateSnapshotterFactoryBean.setExecutor(Executors.newSingleThreadExecutor());
    return springAggregateSnapshotterFactoryBean;
}

@Bean("giftCardSnapshotTrigger")
public SnapshotTriggerDefinition giftCardSnapshotTriggerDefinition(Snapshotter snapshotter) {
    return new EventCountSnapshotTriggerDefinition(snapshotter, 10);
}
You can check that your snapshot is working by looking at the client log: after 10 events on the same aggregateId, you should find this info log entry:
o.a.a.c.event.axon.AxonServerEventStore : Snapshot created
To check, you can use the REST API to retrieve the events for an aggregate:
curl -X GET "http://localhost:8024/v1/events?aggregateId=A01"
This will produce a stream containing events starting from the latest snapshot: you will see at most nine events listed until the tenth event is processed; after that, the endpoint will list events from the snapshot onward.
You can also check the /actuator/health endpoint: it will show the last snapshot token if showDetails is enabled (enabled by default in EE, not enabled by default in SE).
Corrado.

Get Aggregate Information from Elasticsearch using Spring-data-elasticsearch, ElasticsearchRepository

I would like to get aggregate results from ES, such as avgSize (the average of a field named 'size'), total hits for documents that match a term, and some other aggregates in the future, for which I don't think ElasticsearchRepository has any methods to call. I built the query and aggregation builders as below. I want to use my repository interface, but I am not sure what the return object type should be; should it be a document type from my DTOs? Also, I have seen examples where the searchQuery is passed directly to ElasticsearchTemplate, but then what is the point of having a repository interface that extends ElasticsearchRepository?
Repository Interface
public interface CCFilesSummaryRepository extends ElasticsearchRepository<DataReferenceSummary, UUID> {
}
Elastic configuration
@Configuration
@EnableElasticsearchRepositories(basePackages = "com.xxx.repository.es")
public class ElasticConfiguration {

    @Bean
    public ElasticsearchOperations elasticsearchTemplate() throws UnknownHostException {
        return new ElasticsearchTemplate(elasticsearchClient());
    }

    @Bean
    public Client elasticsearchClient() throws UnknownHostException {
        Settings settings = Settings.builder().put("cluster.name", "elasticsearch").build();
        TransportClient client = new PreBuiltTransportClient(settings);
        client.addTransportAddress(new TransportAddress(InetAddress.getLocalHost(), 9200));
        return client;
    }
}
Service Method
public DataReferenceSummary createSummary(final DataSet dataSet) {
    try {
        QueryBuilder queryBuilder = QueryBuilders.matchQuery("type", dataSet.getDataSetCreateRequest().getContentType());
        AvgAggregationBuilder avgAggregationBuilder = AggregationBuilders.avg("avg_size").field("size");
        ValueCountAggregationBuilder valueCountAggregationBuilder = AggregationBuilders.count("total_references")
                .field("asset_id");
        SearchQuery searchQuery = new NativeSearchQueryBuilder()
                .withQuery(queryBuilder)
                .addAggregation(avgAggregationBuilder)
                .addAggregation(valueCountAggregationBuilder)
                .build();
        return ccFilesSummaryRepository.search(searchQuery).iterator().next();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return null;
}
DataReferenceSummary is just a POJO for now, and I am getting an error during my build that says: Unable to build bean CCFilesSummaryRepository, IllegalArgumentException: DataReferenceSummary is not a managed Object.
First, DataReferenceSummary must be a class annotated with @Document.
In Spring Data Elasticsearch 3.2.0 (the current version) you need to define the repository return type as AggregatedPage<DataReferenceSummary>; the returned object will contain the aggregations.
From the upcoming version 4.0 on, you will have to define the return type as SearchHits<DataReferenceSummary> and find the aggregations in that returned object.
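For illustration, here is a minimal sketch of what the annotated entity could look like under Spring Data Elasticsearch 3.2; the index name and field list are assumptions, not taken from the question:

import java.util.UUID;
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;

// Hypothetical mapping; adjust the index name and fields to your own index.
@Document(indexName = "data_reference_summary")
public class DataReferenceSummary {

    @Id
    private UUID id;      // matches the UUID id type used by the repository

    private String type;  // field used in the matchQuery
    private long size;    // field aggregated by avg_size

    // getters and setters omitted for brevity
}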

Cache Kafka Records using Caffeine Cache in Spring Boot

I am trying to cache Kafka records for a 3 minute interval, after which they should expire and be removed from the cache.
Each incoming record fetched by the Kafka consumer (written in Spring Boot) needs to be put into the cache first; then, if a subsequent record matches one already in the cache, the duplicate needs to be discarded.
I have tried using Caffeine cache as below,
@EnableCaching
public class AppCacheManagerConfig {

    @Bean
    public CacheManager cacheManager(Ticker ticker) {
        CaffeineCache bookCache = buildCache("declineRecords", ticker, 3);
        SimpleCacheManager cacheManager = new SimpleCacheManager();
        cacheManager.setCaches(Collections.singletonList(bookCache));
        return cacheManager;
    }

    private CaffeineCache buildCache(String name, Ticker ticker, int minutesToExpire) {
        return new CaffeineCache(name, Caffeine.newBuilder().expireAfterWrite(minutesToExpire, TimeUnit.MINUTES)
                .maximumSize(100).ticker(ticker).build());
    }

    @Bean
    public Ticker ticker() {
        return Ticker.systemTicker();
    }
}
and my Kafka Consumer is as below,
@Autowired
CachingServiceImpl cachingService;

@KafkaListener(topics = "#{'${spring.kafka.consumer.topic}'}", concurrency = "#{'${spring.kafka.consumer.concurrentConsumers}'}", errorHandler = "#{'${spring.kafka.consumer.errorHandler}'}")
public void consume(Message<?> message, Acknowledgment acknowledgment,
        @Header(KafkaHeaders.RECEIVED_TIMESTAMP) long createTime) {
    logger.info("Recieved Message: " + message.getPayload());
    try {
        boolean approveTopic = false;
        boolean duplicateRecord = false;
        if (cachingService.isDuplicateCheck(declineRecord)) {
            //do something with records
        } else {
            //do something with records
        }
        cachingService.putInCache(xmlJSONObj, declineRecord, time);
and my caching service is as below,
@Component
public class CachingServiceImpl {

    private static final Logger logger = LoggerFactory.getLogger(CachingServiceImpl.class);

    @Autowired
    CacheManager cacheManager;

    @Cacheable(value = "declineRecords", key = "#declineRecord", sync = true)
    public String putInCache(JSONObject xmlJSONObj, String declineRecord, String time) {
        logger.info("Record is Cached for 3 minutes interval check", declineRecord);
        cacheManager.getCache("declineRecords").put(declineRecord, time);
        return declineRecord;
    }

    public boolean isDuplicateCheck(String declineRecord) {
        if (null != cacheManager.getCache("declineRecords").get(declineRecord)) {
            return true;
        }
        return false;
    }
}
But each time a record arrives at the consumer, my cache is always empty; it is not holding the records.
Modifications Done:
I have added the configuration file below after going through the suggestions and some more R&D, and removed some of the earlier logic. Now the caching is working as expected, but the duplicate check fails when all three consumers send the same record.
@Configuration
public class AppCacheManagerConfig {

    public static Cache<String, Object> jsonCache =
            Caffeine.newBuilder().expireAfterWrite(3, TimeUnit.MINUTES)
                    .maximumSize(10000).recordStats().build();

    @Bean
    public CacheLoader<Object, Object> cacheLoader() {
        CacheLoader<Object, Object> cacheLoader = new CacheLoader<Object, Object>() {
            @Override
            public Object load(Object key) throws Exception {
                return null;
            }

            @Override
            public Object reload(Object key, Object oldValue) throws Exception {
                return oldValue;
            }
        };
        return cacheLoader;
    }
}
Now I am using the above cache with manual put and get.
I guess you're trying to implement record deduplication for Kafka.
Here is a similar discussion:
https://github.com/spring-projects/spring-kafka/issues/80
Here is the current abstract class which you may extend to achieve the necessary result:
https://github.com/spring-projects/spring-kafka/blob/master/spring-kafka/src/main/java/org/springframework/kafka/listener/adapter/AbstractFilteringMessageListener.java
Your caching service is definitely incorrect: the Cacheable annotation is meant for marking data getters and setters so that caching is added through AOP, while in your code you clearly implement low-level cache-updating logic of your own.
At least the following changes may help you:
Remove @Cacheable. You don't need it because you work with the cache manually, so it may be a source of conflicts (especially since you use sync = true). If it helps, remove @EnableCaching as well; it enables support for cache-related Spring annotations, which you don't need here.
Try removing the Ticker bean and the corresponding parameters on the other beans. It should not be harmful with your configuration, but it is usually only helpful for tests; there is no need to define it otherwise.
Double-check what declineRecord is. If it's a serialized object, ensure that serialization works properly.
Add recordStats() to the cache and output stats() to the log for further analysis.
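If the duplicate check still races between the three concurrent consumers, one option (not from the original answer) is to replace the separate get/put calls with a single atomic check-and-put through Caffeine's ConcurrentMap view. A minimal sketch under that assumption, with placeholder key and value:

import java.util.concurrent.TimeUnit;
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class DeduplicationCache {

    private final Cache<String, String> cache = Caffeine.newBuilder()
            .expireAfterWrite(3, TimeUnit.MINUTES)
            .maximumSize(10_000)
            .build();

    // Returns true the first time a key is seen within the expiry window,
    // false for duplicates. putIfAbsent is atomic, so concurrent consumers
    // cannot both "win" for the same key.
    public boolean markIfFirstSeen(String declineRecord, String time) {
        return cache.asMap().putIfAbsent(declineRecord, time) == null;
    }
}

The listener would then call markIfFirstSeen(...) once per record and branch on the boolean, instead of calling isDuplicateCheck and putInCache as two separate steps.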

Spring Boot Application cannot run test cases involving Two Databases correctly - either get detached entities or no inserts

I am trying to write a Spring Boot app that talks to two databases: a primary one, which is read-write, and a secondary one, which is read-only.
It is also using spring-data-jpa for the repositories.
Roughly speaking, this guide describes what I am doing: https://www.baeldung.com/spring-data-jpa-multiple-databases
And this documentation from Spring:
https://docs.spring.io/spring-boot/docs/current/reference/html/howto-data-access.html#howto-two-datasources
I am having trouble understanding some things, I think about the transaction managers, which makes my program error out either during normal operation or during unit tests.
I am running into two issues, with two different transaction managers that I do not understand well:
1) When I use JpaTransactionManager, my secondary entities become detached between function calls. This happens both in my application running in the full Spring Boot Tomcat and when running a JUnit test with SpringRunner.
2) When I use DataSourceTransactionManager, which was given in some tutorial, my application works correctly, but when I try to run a test case with SpringRunner, without running the full server, Spring/Hibernate will not perform any inserts or updates on the primaryDataSource.
--
Here is a snippet of code for (1) from a service class.
Here is a snippet of code for (1) form a service class.
#Transactional
public List<PrimaryGroup> synchronizeAllGroups(){
Iterable<SecondarySite> secondarySiteList = secondarySiteRepo.findAll();
List<PrimaryGroup> allUserGroups = new ArrayList<PrimaryGroup>(0);
for( SecondarySite secondarySite: secondarySiteList){
allUserGroups.addAll(synchronizeSiteGroups( secondarySite.getName(), secondarySite));
}
return allUserGroups;
}
#Transactional
public List<PrimaryGroup> synchronizeSiteGroups(String sitename, SecondarySite secondarySite){
// GET all secondary groups
if( secondarySite == null){
secondarySite = secondarySiteRepo.getSiteByName(sitename);
}
logger.debug("synchronizeGroups started - siteId:{}", secondarySite.getLuid().toString());
List<SecondaryGroup> secondaryGroups = secondarySite.getGroups();// This shows the error because secondarySite passed in is detached
List<PrimaryGroup> primaryUserGroups = primaryGroupRepository.findAllByAppName(sitename);
...
// modify existingUserGroups to have new data from secondary
...
primaryGroupRepository.save(primaryUserGroups );
logger.debug("synchronizeGroups complete");
return existingUserGroups;
}
I am pretty sure I understand what is going on with the detached entities under JpaTransactionManager: when populateAllUsers calls populateSiteUser, only the primaryTransactionManager is carried over and the secondary one gets left out, so the entities become detached. I can probably work around that, but I'd like to see if there is any way to make this work without putting all calls to secondary* into a separate service layer that returns non-managed entities.
--
Here is a snippet of code for (2) from a controller class
@GetMapping("synchronize/secondary")
public String synchronizesecondary() throws UnsupportedEncodingException {
    synchronizeAllGroups();       // pull all the groups
    synchronizeAllUsers();        // pull all the users
    synchronizeAllUserGroupMaps(); // add the mapping table
    return "true";
}
This references the same synchronizeAllGroups from above, but when I am using DataSourceTransactionManager I do not get that detached entity error.
What I get instead is that the primaryGroupRepository.save(primaryUserGroups) call does not generate any insert or update statement when running a JUnit test that calls the controller directly. So when synchronizeAllUserGroupMaps gets called, primaryUserRepository.findAll() returns 0 rows, and the same goes for primaryGroupRepository.
That is to say - it works when I run this test case:
@RunWith(SpringRunner.class)
@SpringBootTest(classes = app.DApplication.class, properties = {"spring.profiles.active=local,embedded"})
@AutoConfigureMockMvc
public class MockTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    public void shouldSync() throws Exception {
        this.mockMvc.perform(get("/admin/synchronize/secondary")).andDo(print()).andExpect(status().isOk());
    }
}
But it does not do any inserts or updates when I run this test case:
@RunWith(SpringRunner.class)
@SpringBootTest(classes = app.DApplication.class, properties = {"spring.profiles.active=local,embedded"}, webEnvironment = WebEnvironment.MOCK)
@AutoConfigureMockMvc
public class ControllerTest {

    @Autowired
    AdminController adminController;

    @Test
    public void shouldSync() throws Exception {
        String o = adminController.synchronizesecondary();
    }
}
Here are the two configuration classes
Primary:
@Configuration
@EnableTransactionManagement
@EntityScan(basePackageClasses = app.primary.dao.BasePackageMarker.class)
@EnableJpaRepositories(
        transactionManagerRef = "dataSourceTransactionManager",
        entityManagerFactoryRef = "primaryEntityManagerFactory",
        basePackageClasses = { app.primary.dao.BasePackageMarker.class }
)
public class PrimaryConfig {

    @Bean(name = "primaryDataSourceProperties")
    @Primary
    @ConfigurationProperties("app.primary.datasource")
    public DataSourceProperties primaryDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean(name = "primaryDataSource")
    @Primary
    public DataSource primaryDataSourceEmbedded() {
        return primaryDataSourceProperties().initializeDataSourceBuilder().build();
    }

    @Bean
    @Primary
    public LocalContainerEntityManagerFactoryBean primaryEntityManagerFactory(
            EntityManagerFactoryBuilder builder,
            @Qualifier("primaryDataSource") DataSource primaryDataSource) {
        return builder
                .dataSource(primaryDataSource)
                .packages(app.primary.dao.BasePackageMarker.class)
                .persistenceUnit("primary")
                .build();
    }

    @Bean
    @Primary
    public DataSourceTransactionManager dataSourceTransactionManager(@Qualifier("primaryDataSource") DataSource primaryDataSource) {
        DataSourceTransactionManager txm = new DataSourceTransactionManager(primaryDataSource);
        return txm;
    }
}
And Secondary:
@Configuration
@EnableTransactionManagement
@EntityScan(basePackageClasses = app.secondary.dao.BasePackageMarker.class) /* scan secondary as secondary database */
@EnableJpaRepositories(
        transactionManagerRef = "secondaryTransactionManager",
        entityManagerFactoryRef = "secondaryEntityManagerFactory",
        basePackageClasses = { app.secondary.dao.BasePackageMarker.class }
)
public class SecondaryConfig {

    private static final Logger log = LoggerFactory.getLogger(SecondaryConfig.class);

    @Bean(name = "secondaryDataSourceProperties")
    @ConfigurationProperties("app.secondary.datasource")
    public DataSourceProperties secondaryDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean(name = "secondaryDataSource")
    public DataSource secondaryDataSourceEmbedded() {
        return secondaryDataSourceProperties().initializeDataSourceBuilder().build();
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean secondaryEntityManagerFactory(
            EntityManagerFactoryBuilder builder,
            @Qualifier("secondaryDataSource") DataSource secondaryDataSource) {
        return builder
                .dataSource(secondaryDataSource)
                .packages(app.secondary.dao.BasePackageMarker.class)
                .persistenceUnit("secondary")
                .build();
    }

    @Bean
    public DataSourceTransactionManager secondaryTransactionManager(@Qualifier("secondaryDataSource") DataSource secondaryDataSource) {
        DataSourceTransactionManager txm = new DataSourceTransactionManager(secondaryDataSource);
        return txm;
    }
}
In my real application, the secondary data source, since it is read-only, is used both during real run time and during the unit test I am writing.
I have been having trouble getting Spring to initialize both data sources, so I have not attached a complete example.
Thanks for any insight people can give me.
Edit: I have read some things that say to use a JTA transaction manager when using multiple databases, and I have tried that. I get an error when it tries to run the transaction on my second read-only database as I go to commit to the first database:
Caused by: org.postgresql.util.PSQLException: ERROR: prepared transactions are disabled
Hint: Set max_prepared_transactions to a nonzero value.
In my case, I cannot set that, because the database is a read-only database provided by a vendor; we cannot change anything, and I really shouldn't be trying to include this database in transactions. I just want to be able to call both databases in one service call.

Spring Jdbc inbound channel adapter

I'm trying to write a Spring program that polls the database and selects records to read. I have seen XML examples, but I would like to know how to do this with Java config. Can someone show me an example?
You need a JdbcPollingChannelAdapter @Bean definition, marked with @InboundChannelAdapter:
@Bean
@InboundChannelAdapter(value = "fooChannel", poller = @Poller(fixedDelay = "5000"))
public MessageSource<?> storedProc(DataSource dataSource) {
    return new JdbcPollingChannelAdapter(dataSource, "SELECT * FROM foo where status = 0");
}
http://docs.spring.io/spring-integration/docs/4.3.11.RELEASE/reference/html/overview.html#programming-tips
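To consume what the adapter emits, you also need an endpoint listening on fooChannel. The following is only a minimal sketch under that assumption (the class name and handler body are placeholders, and it presumes @EnableIntegration is active):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.messaging.MessageHandler;

@Configuration
public class JdbcPollingConsumerConfig {

    @Bean
    @ServiceActivator(inputChannel = "fooChannel")
    public MessageHandler fooMessageHandler() {
        // JdbcPollingChannelAdapter emits the selected rows as the message payload
        // (by default a List of column/value maps).
        return message -> System.out.println("Polled rows: " + message.getPayload());
    }
}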
