Hi, I'm currently running a task which needs to stay connected to the DB for about an hour (a requirement).
I'm using a Spring Boot application, and in a @Configuration class I'm setting:
@Bean
public DataSource dataSource(final DatabaseProperties databaseProperties) {
    final var config = new HikariConfig();
    config.setDriverClassName(org.postgresql.Driver.class.getName());
    config.setJdbcUrl("my url");
    config.setPassword("my password");
    config.setUsername("my user name");
    // all Hikari timeouts below are in milliseconds; 36000000 ms = 10 hours
    config.setInitializationFailTimeout(30000);
    config.setConnectionTimeout(36000000);
    config.setIdleTimeout(36000000);
    config.setMaxLifetime(36000000);
    config.setKeepaliveTime(15000000);
    config.setMaximumPoolSize(100);
    config.setValidationTimeout(36000000);
    return new HikariDataSource(config);
}
However, when this time-consuming task is called, the application shuts down within a few seconds, logging:
{
"timestamp": "2022-06-01 12:08:52.820",
"level": "INFO",
"thread": "SpringApplicationShutdownHook",
"logger": "com.zaxxer.hikari.HikariDataSource",
"message": "HikariPool-1 - Shutdown initiated...",
"context": "default"
}
There are answers about Hikari shutting down as soon as the app starts, but I couldn't find any answer for this case.
Please tell me what I'm doing wrong.
I have a Spring Boot REST API which uploads some documents to a Firestore DB. The problem is that when I run it locally it works absolutely fine, but when I package it as a jar and deploy it to AWS Elastic Beanstalk, that endpoint gives the error response below.
{
"timestamp": "2022-11-01T13:53:21.121+00:00",
"status": 500,
"error": "Internal Server Error",
"message": "FirebaseApp with name [DEFAULT] doesn't exist. ",
}
This is how I am reading the Firebase service account file (the file name is serviceaccount.json), which is located in the src/main/resources folder, and I have a firebase.credential.resource=serviceaccount.json entry in my application.properties:
@Value("${firebase.credential.resource}")
String resourcePath;

@PostConstruct
public void initialize() {
    try {
        Resource resource = new ClassPathResource(resourcePath);
        //FileInputStream serviceAccount = new FileInputStream(resource.getFile());
        FirebaseOptions options = new FirebaseOptions.Builder()
                .setCredentials(GoogleCredentials.fromStream(resource.getInputStream()))
                .build();
        if (FirebaseApp.getApps().isEmpty()) {
            FirebaseApp.initializeApp(options);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
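For context, the endpoint that uploads documents then obtains a Firestore handle roughly along these lines (a simplified sketch, not the exact controller code; the collection name is just a placeholder):
import com.google.cloud.firestore.Firestore;
import com.google.firebase.cloud.FirestoreClient;

public class DocumentUploader {

    // Simplified sketch: getFirestore() looks up the [DEFAULT] FirebaseApp, so if
    // initialize() above never ran successfully, this is where the
    // "FirebaseApp with name [DEFAULT] doesn't exist" error is raised.
    public void upload(Object document) {
        Firestore firestore = FirestoreClient.getFirestore();
        firestore.collection("documents").add(document); // "documents" is a placeholder collection name
    }
}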
I wanted to have two health-related endpoints in my Spring Boot application, one simple and the other one more detailed, for example -
the simple API -
GET http://localhost:8080/actuator/health
{
"status": "UP"
}
the detailed API -
GET http://localhost:8080/actuator/health-detailed
{
"status": "UP",
"components": {
"custom": {
"status": "UP"
},
"diskSpace": {
"status": "UP",
"details": {
"total": 254971625472,
"free": 60132696064,
"threshold": 10485760,
"exists": true
}
},
"ping": {
"status": "UP"
}
}
}
But I am not able to figure out how to achieve this with Spring Boot Actuator.
Yes, you can.
Health Indicators in Spring Boot
Out of the box, Spring Boot registers many HealthIndicators to report the healthiness of a particular application aspect.
Some of those indicators are almost always registered, such as DiskSpaceHealthIndicator or PingHealthIndicator. The former reports the current state of the disk and the latter serves as a ping endpoint for the application.
On the other hand, Spring Boot registers some indicators conditionally. That is if some dependencies are on the classpath or some other conditions are met, Spring Boot might register a few other HealthIndicators, too. For instance, if we're using relational databases, then Spring Boot registers DataSourceHealthIndicator. Similarly, it'll register CassandraHealthIndicator if we happen to use Cassandra as our data store.
In order to inspect the health status of a Spring Boot application, we can call the /actuator/health endpoint. This endpoint will report an aggregated result of all registered HealthIndicators.
Custom HealthIndicators
In addition to the built-in ones, we can register custom HealthIndicators to report the health of a component or subsystem. To do that, all we have to do is register an implementation of the HealthIndicator interface as a Spring bean.
@Component
public class RandomHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        // placeholder check: report UP or DOWN based on a random value
        boolean healthy = ThreadLocalRandom.current().nextDouble() > 0.1;
        return healthy ? Health.up().build() : Health.down().build();
    }
}
Add this to your pom.xml:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
and then create this class to add your custom health checks:
@Component
public class CustomHealthCheck extends AbstractHealthIndicator {

    @Override
    protected void doHealthCheck(Health.Builder bldr) throws Exception {
        // TODO implement some check
        boolean running = true;
        if (running) {
            bldr.up();
        } else {
            bldr.down();
        }
    }
}
The top-level status field in the /actuator/health response gives you the brief status that you want:
{
"status": "UP",
"components": {
"db": {
"status": "UP",
"details": {
"database": "Oracle",
"validationQuery": "isValid()"
}
},
"diskSpace": {
"status": "UP",
"details": {
"total": 268238319616,
"free": 40618704896,
"threshold": 10485760,
"exists": true
}
}
}
}
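If you also want a second, more detailed endpoint similar to the /actuator/health-detailed example from the question, Spring Boot 2.2+ supports health groups configured in application.properties. A minimal sketch (the group name detailed is just an example; the group is exposed at /actuator/health/detailed rather than /actuator/health-detailed):
management.endpoints.web.exposure.include=health
# keep the main endpoint brief (status only)
management.endpoint.health.show-details=never
# expose all indicators, with details, under /actuator/health/detailed
management.endpoint.health.group.detailed.include=*
management.endpoint.health.group.detailed.show-details=always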
I am using Spring Integration SFTP to put files on a remote server, and below is the configuration.
<spring-integration.version>5.2.5.RELEASE</spring-integration.version>
I have configured a @MessagingGateway:
@MessagingGateway
public interface SftpMessagingGateway {

    @Gateway(requestChannel = "sftpOutputChannel")
    void sendToFTP(Message<?> message);
}
I have configured the MessageHandler as below,
@Bean
@ServiceActivator(inputChannel = "sftpOutputChannel")
public MessageHandler genericOutboundhandler() {
    SftpMessageHandler handler = new SftpMessageHandler(outboundTemplate(), FileExistsMode.APPEND);
    handler.setRemoteDirectoryExpressionString("headers['remote_directory']");
    handler.setFileNameGenerator((Message<?> message) -> ((String) message.getHeaders().get(Constant.FILE_NAME_KEY)));
    handler.setUseTemporaryFileName(false);
    return handler;
}
I have configured SftpRemoteFileTemplate as below
private SftpRemoteFileTemplate outboundTemplate;

public SftpRemoteFileTemplate outboundTemplate() {
    if (outboundTemplate == null) {
        outboundTemplate = new SftpRemoteFileTemplate(sftpSessionFactory());
    }
    return outboundTemplate;
}
This is the configuration for SessionFactory
public SessionFactory<LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory();
    factory.setHost(host);
    factory.setPort(port);
    factory.setUser(username);
    factory.setPassword(password);
    factory.setAllowUnknownKeys(true);
    factory.setKnownHosts(host);
    factory.setSessionConfig(configPropertiesOutbound());
    CachingSessionFactory<LsEntry> csf = new CachingSessionFactory<LsEntry>(factory);
    csf.setSessionWaitTimeout(1000);
    csf.setPoolSize(10);
    csf.setTestSession(true);
    return csf;
}
I have configured all this in one of the services.
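The gateway is then invoked roughly like this (a simplified sketch of the call site, not the actual service code; the header names are the ones the handler above reads):
import java.io.File;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

public class SftpUploadExample {

    // Simplified sketch of how the gateway is called; Constant.FILE_NAME_KEY is the
    // same constant used by the file name generator in genericOutboundhandler().
    public void upload(SftpMessagingGateway gateway, File localFile) {
        Message<File> message = MessageBuilder
                .withPayload(localFile)
                .setHeader("remote_directory", "/some/remote/dir")
                .setHeader(Constant.FILE_NAME_KEY, localFile.getName())
                .build();
        gateway.sendToFTP(message);
    }
}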
Now the problem is:
Sometimes the entire operation takes more than ~15 minutes, especially if the service has been idle for a few hours, and I am not sure what is causing this.
It looks like the time is spent getting an active session from the CachingSessionFactory; the operations after that are pretty fast. Below is a snapshot from a tool where I managed to capture the timing.
It usually takes a few milliseconds to transfer the files.
I have recently made the changes below, but I was getting the same issue before them as well:
- I have set isShareSession to false; earlier it was DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
- There was no pool size; I have set it to 10.
Have I configured something incorrectly, and is that why I end up piling up connections? Or is there something else?
Observations:
- The time taken to complete the operation is roughly the same every time the issue occurs, i.e. 938000+ milliseconds.
- If I restart the application daily, it works perfectly fine.
I see strange behavior between IntelliJ and a JBoss server.
Short info about the application:
- spring 4.3.1
- jersey 1.18.1
- jboss EAP 6.4.0
I have an issue with marshalling, so I tried to dig deeper and follow the hints from: https://docs.jboss.org/resteasy/docs/1.0.0.GA/userguide/html/Content_Marshalling_Providers.html
So I created this class:
@Named
@Provider
@Primary
@Produces(MediaType.APPLICATION_JSON)
public class JerseyJAXBConfiguration implements ContextResolver<JAXBContext> {

    private JAXBContext context;
    private Class[] types = { SomeClass.class };

    public JerseyJAXBConfiguration() throws Exception {
        JSONConfiguration.MappedBuilder builder = JSONConfiguration.mapped();
        builder.arrays("invite");
        builder.rootUnwrapping(false);
        this.context = new JSONJAXBContext(builder.build(), types);
    }

    public JAXBContext getContext(Class<?> objectType) {
        for (Class type : types) {
            if (type == objectType) {
                return context;
            }
        }
        return null;
    }
}
And I have two JBoss run configurations in IntelliJ: the first deploys a plain 'war' and the second deploys a 'war exploded'. And here is the strange thing: in both cases this bean is loaded when the application starts, but only in the non-exploded case is the 'getContext' method executed. Everything else is the same in both cases; nothing in the code changes between runs.
Can someone explain to me why there is this difference? I would like to know which JAXB implementation is used in the 'war exploded' case.
The main reason I am looking at this is that the JSON created by the web service is also different in the two cases mentioned: one has a "named" (wrapped) array and the other doesn't.
In case 'war':
{"functionGroupAutoFill": [
{
"desc": "0. General",
"id": "0"
},
{
"desc": "00. General",
"id": "00"
},
...
]}
In case 'war exploded':
[
{
"desc": "0. General",
"id": "0"
},
{
"desc": "00. General",
"id": "00"
},
...
]
I am using Spring Cloud AWS (1.0.1.RELEASE) with Spring Boot to run an SQS consumer. The application runs fine, but when it loses its network connection (for instance if I switch off the WIFI on my laptop while it is running there), I see errors on the console and the application never recovers. It just hangs and does not reconnect after the network becomes available again; I have to kill it and bring it back up. How do I force it to recover by itself?
// Spring Boot entry point:
public static void main(String[] args) {
    SpringApplication.run(MyConsumerConfiguration.class, args);
}

// Message listener (a different class)
@MessageMapping(value = "myLogicalQueueName")
public void receive(MyPOJO object) {
}
The error I see on the console:
Exception in thread "simpleMessageListenerContainer-1" com.amazonaws.AmazonClientException: Unable to execute HTTP request: sqs.us-east-1.amazonaws.com
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:473)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:297)
at com.amazonaws.services.sqs.AmazonSQSClient.invoke(AmazonSQSClient.java:2422)
at com.amazonaws.services.sqs.AmazonSQSClient.receiveMessage(AmazonSQSClient.java:1130)
at com.amazonaws.services.sqs.AmazonSQSAsyncClient$23.call(AmazonSQSAsyncClient.java:1678)
at com.amazonaws.services.sqs.AmazonSQSAsyncClient$23.call(AmazonSQSAsyncClient.java:1676)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I just figured out why SQS is not able to reconnect after the network connection is lost.
It actually seems to be a problem in the current Spring Cloud AWS implementation of org.springframework.cloud.aws.messaging.listener.SimpleMessageListenerContainer.java:
private class AsynchronousMessageListener implements Runnable {

    private final QueueAttributes queueAttributes;
    private final String logicalQueueName;

    private AsynchronousMessageListener(String logicalQueueName, QueueAttributes queueAttributes) {
        this.logicalQueueName = logicalQueueName;
        this.queueAttributes = queueAttributes;
    }

    @Override
    public void run() {
        while (isRunning()) {
            ReceiveMessageResult receiveMessageResult = getAmazonSqs().receiveMessage(this.queueAttributes.getReceiveMessageRequest());
            CountDownLatch messageBatchLatch = new CountDownLatch(receiveMessageResult.getMessages().size());
            for (Message message : receiveMessageResult.getMessages()) {
                if (isRunning()) {
                    MessageExecutor messageExecutor = new MessageExecutor(this.logicalQueueName, message, this.queueAttributes);
                    getTaskExecutor().execute(new SignalExecutingRunnable(messageBatchLatch, messageExecutor));
                } else {
                    break;
                }
            }
            try {
                messageBatchLatch.await();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
The code above spins up a new thread which polls the SQS queue to grab messages. Once the network connection is dropped, getAmazonSqs().receiveMessage(this.queueAttributes.getReceiveMessageRequest()) throws an UnknownHostException, which is not handled in the code and causes the thread to terminate.
So when the network connection is re-established later on, there is no thread left polling the queue to retrieve the data.
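For illustration only (this is not the actual fix in the library), the kind of defensive handling that would keep the polling thread alive looks roughly like this:
// Illustrative sketch, not Spring Cloud AWS code: catching the SDK exception keeps
// the polling loop alive across temporary network outages instead of killing the thread.
while (isRunning()) {
    ReceiveMessageResult receiveMessageResult;
    try {
        receiveMessageResult = getAmazonSqs().receiveMessage(this.queueAttributes.getReceiveMessageRequest());
    } catch (AmazonClientException e) {
        // e.g. UnknownHostException wrapped by the SDK while the network is down:
        // back off briefly and try again on the next loop iteration
        try {
            Thread.sleep(5000);
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            break;
        }
        continue;
    }
    // ... dispatch the received messages exactly as in the original run() method ...
}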
I have already raised an issue with Spring for this: https://github.com/spring-cloud/spring-cloud-aws/issues/82
Hope this explains it all.