I just want to understand Kafka rebalancing. This is my listener method. I have configured a RetryTemplate on the consumer factory to retry 20 times with a fixed backoff of 20 seconds. I'm using spring-kafka 1.2.2 (we are planning to upgrade the client) and manual acks.
@KafkaListener(id = "${kafka.listener-id}", topics = "${kafka.topic}")
public void listen(final ConsumerRecord<String, String> consumerRecord,
        final Acknowledgment acknowledgment) throws ServiceResponseException {
    if (true) {
        System.out.println("throwing exception");
        throw new RuntimeException();
    }
    try {
        acknowledgment.acknowledge();
        LOGGER.info("Kafka acknowledgment sent for Transaction ID:");
    } catch (Exception e) {
        LOGGER.info("Exception encountered when acking record with transaction id: {}");
    }
}
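The RetryTemplate described above would have been wired roughly like this (a sketch under spring-kafka 1.2.x; the `containerFactory` variable and exact bean setup are assumptions for illustration, not the actual code):

```java
// Sketch: 20 attempts with a fixed 20-second backoff, attached to the
// listener container factory. Names here are illustrative assumptions.
RetryTemplate retryTemplate = new RetryTemplate();

SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
retryPolicy.setMaxAttempts(20);                  // retry 20 times
retryTemplate.setRetryPolicy(retryPolicy);

FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
backOffPolicy.setBackOffPeriod(20_000L);         // 20 seconds between attempts
retryTemplate.setBackOffPolicy(backOffPolicy);

containerFactory.setRetryTemplate(retryTemplate);
```

With stateless retry like this, all attempts happen on the listener thread, which is why a pending backoff sleep is interrupted when the container shuts down during a rebalance.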
I have 2 workers with concurrency 2 each, and the topic has 3 partitions. I started one worker and all 3 partitions were assigned to worker1. Then I sent a message; a RuntimeException is thrown in the listener, and this repeats 20 times with a 20-second delay. When I then started worker2, a rebalance was triggered but partitions were not assigned immediately. worker1 fails with the message "Error while processing: ConsumerRecord" (after getContainerProperties().getShutdownTimeout()), and then all consumers join the group. Now the same message is delivered to worker2.
1) This is working as I needed it to. But I have a question: when a rebalance is triggered, why doesn't partition assignment happen immediately? Instead it waits for worker1 to stop completely (waiting for getContainerProperties().getShutdownTimeout()), and only then do all consumers from worker1 and worker2 join the group.
2) During the rebalance I observed that the consumers stop calling poll (from the logs below). Is that correct?
Trace logs from worker 1:
2018-09-23 13:52:53.259 TRACE 6384 --- [ listener-2-L-1] essageListenerContainer$ListenerConsumer : No records to process
2018-09-23 13:52:53.259 TRACE 6384 --- [ listener-0-L-1] essageListenerContainer$ListenerConsumer : No records to process
2018-09-23 13:52:53.384 DEBUG 6384 --- [ listener-1-C-1] essageListenerContainer$ListenerConsumer : Received: 0 records
2018-09-23 13:52:53.384 TRACE 6384 --- [ listener-1-C-1] essageListenerContainer$ListenerConsumer : Polling (paused=false)...
2018-09-23 13:52:53.977 DEBUG 6384 --- [ listener-0-C-1] essageListenerContainer$ListenerConsumer : Received: 0 records
2018-09-23 13:52:53.977 TRACE 6384 --- [ listener-0-C-1] essageListenerContainer$ListenerConsumer : Polling (paused=false)...
2018-09-23 13:52:54.008 DEBUG 6384 --- [ listener-2-C-1] essageListenerContainer$ListenerConsumer : Received: 0 records
2018-09-23 13:52:54.008 TRACE 6384 --- [ listener-2-C-1] essageListenerContainer$ListenerConsumer : Polling (paused=false)...
2018-09-23 13:52:54.023 INFO 6384 --- [ listener-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Revoking previously assigned partitions [messages-0] for group mris-group
2018-09-23 13:52:54.023 TRACE 6384 --- [ listener-0-C-1] essageListenerContainer$ListenerConsumer : Received partition revocation notification, and will stop the invoker.
2018-09-23 13:52:54.023 DEBUG 6384 --- [ listener-0-C-1] essageListenerContainer$ListenerConsumer : Stopping invoker
2018-09-23 13:52:54.081 INFO 6384 --- [ listener-1-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Revoking previously assigned partitions [messages-1] for group mris-group
2018-09-23 13:52:54.081 TRACE 6384 --- [ listener-1-C-1] essageListenerContainer$ListenerConsumer : Received partition revocation notification, and will stop the invoker.
2018-09-23 13:52:54.081 DEBUG 6384 --- [ listener-1-C-1] essageListenerContainer$ListenerConsumer : Stopping invoker
2018-09-23 13:52:54.241 INFO 6384 --- [ listener-2-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Revoking previously assigned partitions [messages-2] for group mris-group
2018-09-23 13:52:54.241 TRACE 6384 --- [ listener-2-C-1] essageListenerContainer$ListenerConsumer : Received partition revocation notification, and will stop the invoker.
2018-09-23 13:52:54.241 DEBUG 6384 --- [ listener-2-C-1] essageListenerContainer$ListenerConsumer : Stopping invoker
2018-09-23 13:52:54.264 DEBUG 6384 --- [ listener-2-C-1] essageListenerContainer$ListenerConsumer : Invoker stopped
2018-09-23 13:52:54.264 DEBUG 6384 --- [ listener-0-C-1] essageListenerContainer$ListenerConsumer : Invoker stopped
2018-09-23 13:52:54.264 INFO 6384 --- [ listener-2-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[messages-2]
2018-09-23 13:52:54.264 INFO 6384 --- [ listener-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[messages-0]
2018-09-23 13:52:54.264 INFO 6384 --- [ listener-2-C-1] o.a.k.c.c.internals.AbstractCoordinator : (Re-)joining group mris-group
2018-09-23 13:52:54.265 INFO 6384 --- [ listener-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : (Re-)joining group mris-group
2018-09-23 13:53:09.355 DEBUG 6384 --- [ listener-1-L-1] .a.RecordMessagingMessageListenerAdapter : Processing [GenericMessage [payload=<removed>]
throwing exception
2018-09-23 13:53:24.083 DEBUG 6384 --- [ listener-1-C-1] essageListenerContainer$ListenerConsumer : Interrupting invoker
2018-09-23 13:53:24.083 DEBUG 6384 --- [ listener-1-C-1] essageListenerContainer$ListenerConsumer : Invoker stopped
2018-09-23 13:53:24.085 INFO 6384 --- [ listener-1-C-1] essageListenerContainer$ListenerConsumer : Invoker timed out while waiting for shutdown and will be canceled.
2018-09-23 13:53:24.085 INFO 6384 --- [ listener-1-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[messages-1]
2018-09-23 13:53:24.085 INFO 6384 --- [ listener-1-C-1] o.a.k.c.c.internals.AbstractCoordinator : (Re-)joining group mris-group
2018-09-23 13:53:24.101 ERROR 6384 --- [ listener-1-L-1] o.s.kafka.listener.LoggingErrorHandler : Error while processing: ConsumerRecord(topic = messages, partition = 1, offset = 0, CreateTime = 1537725149052, checksum = 3567644394, serialized key size = 27, serialized value size = 1952, key = test_hotfix1#test.com, value = <removed>])
org.springframework.retry.backoff.BackOffInterruptedException: Thread interrupted while sleeping; nested exception is java.lang.InterruptedException: sleep interrupted
at org.springframework.retry.backoff.FixedBackOffPolicy.doBackOff(FixedBackOffPolicy.java:86) ~[spring-retry-1.2.0.RELEASE.jar:na]
at org.springframework.retry.backoff.StatelessBackOffPolicy.backOff(StatelessBackOffPolicy.java:36) ~[spring-retry-1.2.0.RELEASE.jar:na]
at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:305) ~[spring-retry-1.2.0.RELEASE.jar:na]
at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:179) ~[spring-retry-1.2.0.RELEASE.jar:na]
at org.springframework.kafka.listener.adapter.RetryingAcknowledgingMessageListenerAdapter.onMessage(RetryingAcknowledgingMessageListenerAdapter.java:73) ~[spring-kafka-1.2.2.RELEASE.jar:na]
at org.springframework.kafka.listener.adapter.RetryingAcknowledgingMessageListenerAdapter.onMessage(RetryingAcknowledgingMessageListenerAdapter.java:39) ~[spring-kafka-1.2.2.RELEASE.jar:na]
at org.springframework.kafka.listener.adapter.FilteringAcknowledgingMessageListenerAdapter.onMessage(FilteringAcknowledgingMessageListenerAdapter.java:55) ~[spring-kafka-1.2.2.RELEASE.jar:na]
at org.springframework.kafka.listener.adapter.FilteringAcknowledgingMessageListenerAdapter.onMessage(FilteringAcknowledgingMessageListenerAdapter.java:34) ~[spring-kafka-1.2.2.RELEASE.jar:na]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:794) [spring-kafka-1.2.2.RELEASE.jar:na]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:738) [spring-kafka-1.2.2.RELEASE.jar:na]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.access$2200(KafkaMessageListenerContainer.java:245) [spring-kafka-1.2.2.RELEASE.jar:na]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer$ListenerInvoker.run(KafkaMessageListenerContainer.java:1031) [spring-kafka-1.2.2.RELEASE.jar:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_162]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_162]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_162]
Caused by: java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method) [na:1.8.0_162]
at org.springframework.retry.backoff.ThreadWaitSleeper.sleep(ThreadWaitSleeper.java:30) ~[spring-retry-1.2.0.RELEASE.jar:na]
at org.springframework.retry.backoff.FixedBackOffPolicy.doBackOff(FixedBackOffPolicy.java:83) ~[spring-retry-1.2.0.RELEASE.jar:na]
... 14 common frames omitted
2018-09-23 13:53:24.101 INFO 6384 --- [ listener-1-C-1] o.a.k.c.c.internals.AbstractCoordinator : Successfully joined group mris-group with generation 10
2018-09-23 13:53:24.101 INFO 6384 --- [ listener-2-C-1] o.a.k.c.c.internals.AbstractCoordinator : Successfully joined group mris-group with generation 10
2018-09-23 13:53:24.102 INFO 6384 --- [ listener-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : Successfully joined group mris-group with generation 10
2018-09-23 13:53:24.102 INFO 6384 --- [ listener-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Setting newly assigned partitions [messages-0] for group mris-group
2018-09-23 13:53:24.102 INFO 6384 --- [ listener-1-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Setting newly assigned partitions [messages-2] for group mris-group
2018-09-23 13:53:24.102 INFO 6384 --- [ listener-2-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Setting newly assigned partitions [] for group mris-group
2018-09-23 13:53:24.103 DEBUG 6384 --- [ listener-2-C-1] essageListenerContainer$ListenerConsumer : Committing: {}
2018-09-23 13:53:24.103 INFO 6384 --- [ listener-2-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned:[]
2018-09-23 13:53:24.103 DEBUG 6384 --- [ listener-0-C-1] essageListenerContainer$ListenerConsumer : Committing: {messages-0=OffsetAndMetadata{offset=0, metadata=''}}
2018-09-23 13:53:24.104 DEBUG 6384 --- [ listener-1-C-1] essageListenerContainer$ListenerConsumer : Committing: {messages-2=OffsetAndMetadata{offset=0, metadata=''}}
2018-09-23 13:53:24.106 INFO 6384 --- [ listener-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned:[messages-0]
2018-09-23 13:53:24.107 DEBUG 6384 --- [ listener-0-C-1] essageListenerContainer$ListenerConsumer : Received: 0 records
2018-09-23 13:53:24.107 TRACE 6384 --- [ listener-0-C-1] essageListenerContainer$ListenerConsumer : Polling (paused=false)...
2018-09-23 13:53:24.108 INFO 6384 --- [ listener-1-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned:[messages-2]
2018-09-23 13:53:24.108 DEBUG 6384 --- [ listener-1-C-1] essageListenerContainer$ListenerConsumer : Received: 0 records
2018-09-23 13:53:24.108 TRACE 6384 --- [ listener-1-C-1] essageListenerContainer$ListenerConsumer : Polling (paused=false)...
2018-09-23 13:53:24.207 DEBUG 6384 --- [ listener-2-C-1] essageListenerContainer$ListenerConsumer : Received: 0 records
2018-09-23 13:53:24.207 TRACE 6384 --- [ listener-2-C-1] essageListenerContainer$ListenerConsumer : Polling (paused=false)...
2018-09-23 13:53:25.111 TRACE 6384 --- [ listener-0-L-2] essageListenerContainer$ListenerConsumer : No records to process
Trace logs from worker2:
2018-09-23 13:53:24.102 INFO 6401 --- [ listener-2-C-1] o.a.k.c.c.internals.AbstractCoordinator : Successfully joined group mris-group with generation 10
2018-09-23 13:53:24.104 INFO 6401 --- [ listener-1-C-1] o.a.k.c.c.internals.AbstractCoordinator : Successfully joined group mris-group with generation 10
2018-09-23 13:53:24.105 INFO 6401 --- [ listener-1-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Setting newly assigned partitions [] for group mris-group
2018-09-23 13:53:24.105 INFO 6401 --- [ listener-2-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Setting newly assigned partitions [] for group mris-group
2018-09-23 13:53:24.105 DEBUG 6401 --- [ listener-2-C-1] essageListenerContainer$ListenerConsumer : Committing: {}
2018-09-23 13:53:24.105 DEBUG 6401 --- [ listener-1-C-1] essageListenerContainer$ListenerConsumer : Committing: {}
2018-09-23 13:53:24.105 INFO 6401 --- [ listener-2-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned:[]
2018-09-23 13:53:24.105 INFO 6401 --- [ listener-1-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned:[]
2018-09-23 13:53:24.106 INFO 6401 --- [ listener-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Setting newly assigned partitions [messages-1] for group mris-group
2018-09-23 13:53:24.111 DEBUG 6401 --- [ listener-0-C-1] essageListenerContainer$ListenerConsumer : Committing: {messages-1=OffsetAndMetadata{offset=0, metadata=''}}
2018-09-23 13:53:24.115 INFO 6401 --- [ listener-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned:[messages-1]
2018-09-23 13:53:24.118 DEBUG 6401 --- [ listener-0-C-1] essageListenerContainer$ListenerConsumer : Received: 0 records
2018-09-23 13:53:24.118 TRACE 6401 --- [ listener-0-C-1] essageListenerContainer$ListenerConsumer : Polling (paused=false)...
2018-09-23 13:53:24.189 DEBUG 6401 --- [ listener-0-C-1] essageListenerContainer$ListenerConsumer : Received: 1 records
2018-09-23 13:53:24.189 TRACE 6401 --- [ listener-0-C-1] essageListenerContainer$ListenerConsumer : Polling (paused=false)...
2018-09-23 13:53:24.202 TRACE 6401 --- [ listener-0-L-1] essageListenerContainer$ListenerConsumer : Processing ConsumerRecord(topic = messages, partition = 1, offset = 0, CreateTime = 1537725149052, checksum = 3567644394, serialized key size = 27, serialized value size = 1952, key = test_hotfix1#test.com, value = <removed>)
2018-09-23 13:53:24.209 DEBUG 6401 --- [ listener-1-C-1] essageListenerContainer$ListenerConsumer : Received: 0 records
2018-09-23 13:53:24.209 DEBUG 6401 --- [ listener-2-C-1] essageListenerContainer$ListenerConsumer : Received: 0 records
2018-09-23 13:53:24.209 TRACE 6401 --- [ listener-1-C-1] essageListenerContainer$ListenerConsumer : Polling (paused=false)...
2018-09-23 13:53:24.210 TRACE 6401 --- [ listener-2-C-1] essageListenerContainer$ListenerConsumer : Polling (paused=false)...
2018-09-23 13:53:24.216 DEBUG 6401 --- [ listener-0-L-1] .a.RecordMessagingMessageListenerAdapter : Processing [GenericMessage [payload=<removed>]
throwing exception
2018-09-23 13:53:25.194 DEBUG 6401 --- [ listener-0-C-1] essageListenerContainer$ListenerConsumer : Received: 0 records
2018-09-23 13:53:25.194 TRACE 6401 --- [ listener-0-C-1] essageListenerContainer$ListenerConsumer : Polling (paused=false)...
Versions prior to 1.3 had a very complicated threading model to avoid rebalancing due to a slow listener. KIP-62 enabled us to use a much simpler threading model in 1.3 and later.
1.2.x is no longer supported, and I don't have the time (or inclination) to go back to figure out what happened. Please upgrade to 1.3.7 (or even better, 2.1.10).
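With the KIP-62-based model in 1.3 and later, a slow listener is accommodated by the consumer's max.poll.interval.ms property (heartbeats come from a background thread) rather than by a separate listener thread. A minimal sketch of the relevant consumer property; the value here is illustrative, not taken from the question:

```java
import java.util.HashMap;
import java.util.Map;

public class ConsumerProps {
    public static Map<String, Object> consumerProps() {
        Map<String, Object> props = new HashMap<>();
        // KIP-62 (kafka-clients 0.10.1+) decouples heartbeating from poll(),
        // so a long-running listener only needs a generous poll interval:
        props.put("max.poll.interval.ms", 600_000); // illustrative: 10 minutes
        return props;
    }
}
```

These properties would be passed to the DefaultKafkaConsumerFactory along with the usual bootstrap/group/deserializer settings.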
Related
HikariPool-1 - Exception during pool initialization when trying to run a Spring Boot application. When running mvn spring-boot:run, the logs show the repositories initialized and the server starts and stays running, but an exception is thrown during Hikari pool initialization.
I am able to connect to the DB using pgAdmin, but not from the application. Here is my application.properties:
spring.servlet.multipart.max-file-size=10MB
spring.servlet.multipart.max-request-size=10MB
spring.datasource.url= jdbc:postgresql:/34.93.135.89:5433/mtt_04_dec_22
spring.datasource.username= postgres
spring.datasource.password= justdoit#mtt
spring.jpa.hibernate.ddl-auto= none
spring.datasource.testWhileIdle=true
spring.datasource.test-on-borrow=true
spring.datasource.hikari.connectionTimeout=30000
spring.datasource.hikari.maxLifetime=60000
spring.datasource.hikari.maximum-pool-size=30
spring.jpa.open-in-view=false
And the Hikari debug logs:
PersistenceUnitInfo [name: default]
2022-12-02 11:30:24.569 INFO 14176 --- [ task-1] org.hibernate.Version : HHH000412: Hibernate ORM core version 5.4.15.Final
2022-12-02 11:30:24.791 INFO 14176 --- [ task-1] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {5.1.0.Final}
2022-12-02 11:30:24.947 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : HikariPool-1 - configuration:
2022-12-02 11:30:24.950 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : allowPoolSuspension.............false
2022-12-02 11:30:24.950 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : autoCommit......................true
2022-12-02 11:30:24.951 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : catalog.........................none
2022-12-02 11:30:24.951 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : connectionInitSql...............none
2022-12-02 11:30:24.951 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : connectionTestQuery.............none
2022-12-02 11:30:24.952 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : connectionTimeout...............30000
2022-12-02 11:30:24.952 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : dataSource......................none
2022-12-02 11:30:24.952 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : dataSourceClassName.............none
2022-12-02 11:30:24.952 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : dataSourceJNDI..................none
2022-12-02 11:30:24.953 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : dataSourceProperties............{password=<masked>}
2022-12-02 11:30:24.953 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : driverClassName................."org.postgresql.Driver"
2022-12-02 11:30:24.953 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : exceptionOverrideClassName......none
2022-12-02 11:30:24.954 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : healthCheckProperties...........{}
2022-12-02 11:30:24.954 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : healthCheckRegistry.............none
2022-12-02 11:30:24.954 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : idleTimeout.....................600000
2022-12-02 11:30:24.954 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : initializationFailTimeout.......1
2022-12-02 11:30:24.955 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : isolateInternalQueries..........false
2022-12-02 11:30:24.955 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : jdbcUrl.........................jdbc:postgresql:/35.200.211.39:5432/mtt_04_dec_22
2022-12-02 11:30:24.955 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : leakDetectionThreshold..........0
2022-12-02 11:30:24.955 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : maxLifetime.....................30000
2022-12-02 11:30:24.956 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : maximumPoolSize.................30
2022-12-02 11:30:24.956 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : metricRegistry..................none
2022-12-02 11:30:24.956 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : metricsTrackerFactory...........none
2022-12-02 11:30:24.956 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : minimumIdle.....................30
2022-12-02 11:30:24.956 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : password........................<masked>
2022-12-02 11:30:24.957 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : poolName........................"HikariPool-1"
2022-12-02 11:30:24.957 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : readOnly........................false
2022-12-02 11:30:24.957 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : registerMbeans..................false
2022-12-02 11:30:24.957 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : scheduledExecutor...............none
2022-12-02 11:30:24.957 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : schema..........................none
2022-12-02 11:30:24.958 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : threadFactory...................internal
2022-12-02 11:30:24.958 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : transactionIsolation............default
2022-12-02 11:30:24.958 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : username........................"postgres"
2022-12-02 11:30:24.959 DEBUG 14176 --- [ task-1] com.zaxxer.hikari.HikariConfig : validationTimeout...............5000
The exception on the console looks like this:
HikariPool-1 - Exception during pool initialization.
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.3.0.RELEASE)
2022-12-02 09:26:51.258 INFO 14888 --- [ main] org.sadisha.backend.service.Main : Starting Main on DESKTOP-991KNCS with PID 14888 (C:\mttbe\mtt-be\target\classes started by Admin in C:\mttbe\mtt-be)
2022-12-02 09:26:51.265 INFO 14888 --- [ main] org.sadisha.backend.service.Main : No active profile set, falling back to default profiles: default
2022-12-02 09:26:52.481 INFO 14888 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFERRED mode.
2022-12-02 09:26:52.715 INFO 14888 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 220ms. Found 11 JPA repository interfaces.
2022-12-02 09:26:53.776 INFO 14888 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2022-12-02 09:26:53.793 INFO 14888 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2022-12-02 09:26:53.794 INFO 14888 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.35]
2022-12-02 09:26:54.042 INFO 14888 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2022-12-02 09:26:54.043 INFO 14888 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 2655 ms
2022-12-02 09:26:54.178 INFO 14888 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2022-12-02 09:26:57.285 ERROR 14888 --- [ main] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:303) ~[postgresql-42.2.18.jar:42.2.18]
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51) ~[postgresql-42.2.18.jar:42.2.18]
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:225) ~[postgresql-42.2.18.jar:42.2.18]
at org.postgresql.Driver.makeConnection(Driver.java:465) ~[postgresql-42.2.18.jar:42.2.18]
at org.postgresql.Driver.connect(Driver.java:264) ~[postgresql-42.2.18.jar:42.2.18]
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) ~[HikariCP-3.4.5.jar:na]
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358) ~[HikariCP-3.4.5.jar:na]
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206) ~[HikariCP-3.4.5.jar:na]
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477) ~[HikariCP-3.4.5.jar:na]
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:560) ~[HikariCP-3.4.5.jar:na]
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115) ~[HikariCP-3.4.5.jar:na]
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112) ~[HikariCP-3.4.5.jar:na]
at org.springframework.jdbc.datasource.DataSourceUtils.fetchConnection(DataSourceUtils.java:158) ~[spring-jdbc-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:116) ~[spring-jdbc-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:79) ~[spring-jdbc-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:324) ~[spring-jdbc-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.boot.jdbc.EmbeddedDatabaseConnection.isEmbedded(EmbeddedDatabaseConnection.java:120) ~[spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE]
at org.springframework.boot.autoconfigure.jdbc.DataSourceInitializer.isEmbedded(DataSourceInitializer.java:137) ~[spring-boot-autoconfigure-2.3.0.RELEASE.jar:2.3.0.RELEASE]
at org.springframework.boot.autoconfigure.jdbc.DataSourceInitializer.isEnabled(DataSourceInitializer.java:129) ~[spring-boot-autoconfigure-2.3.0.RELEASE.jar:2.3.0.RELEASE]
at org.springframework.boot.autoconfigure.jdbc.DataSourceInitializer.createSchema(DataSourceInitializer.java:96) ~[spring-boot-autoconfigure-2.3.0.RELEASE.jar:2.3.0.RELEASE]
In your application.properties, change this:
spring.datasource.url=jdbc:postgresql:/34.93.135.89:5433/mtt_04_dec_22
to:
spring.datasource.url=jdbc:postgresql://34.93.135.89:5433/mtt_04_dec_22
The URL is missing a slash: the host-based form is jdbc:postgresql://host:port/database. With only a single slash the driver does not parse a host and likely falls back to localhost:5432, which is exactly what the "Connection to localhost:5432 refused" error shows. Also note that PostgreSQL's default port is 5432, not 5433, so double-check that your server is really listening on 5433 (you wrote 34.93.135.89:5433).
This is my pom.xml file
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.4.1</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<groupId>com.activeedge</groupId>
<artifactId>aetbigdatasoluions</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>AETProcessor</name>
<description> Project for Data processing</description>
<properties>
<java.version>11</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-mail</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
<exclusions>
<exclusion>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-logging</artifactId>
</exclusion>
<exclusion>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<scope>runtime</scope>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-validation</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.camel/camel-spark-starter -->
<!-- <dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-spark-starter</artifactId>
<version>3.0.0-RC3</version>
</dependency> -->
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.12</artifactId>
<version>3.0.1</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-sql -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.12</artifactId>
<version>3.0.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.12</artifactId>
<version>3.0.0</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>com.crealytics</groupId>
<artifactId>spark-excel_2.12</artifactId>
<version>0.13.1</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<excludes>
<exclude>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
</exclude>
</excludes>
</configuration>
</plugin>
</plugins>
</build>
</project>
This is my class file
package com.aet.service;
import java.util.Properties;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.SparkSession.Builder;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
@Component
public class CBASparkprocessor {
@Value("${cbapath.path}")
private String cbaPath;
public void readcbadata() {
System.out.println("calling spark method ............");
SparkSession spark = SparkSession.builder()
        .appName("CBA and Postillion")
        .master("local")
        // .config("spark.sql.warehouse.dir", "file:///C:/temp")
        .getOrCreate();
System.out.println("called spark method ............");
Dataset<Row> df = spark.read().format("com.crealytics.spark.excel")
        // .option("sheetName", "Export Worksheet")
        .option("header", "true") // Required
        .option("useHeader", "true")
        .option("inferSchema", "true")
        .option("dateFormat", "yy-mmm-d")
        .option("treatEmptyValuesAsNulls", "true")
        .option("addColorColumns", "false")
        // .option("ignoreLeadingWhiteSpace", "true")
        // .option("ignoreTrailingWhiteSpace", "true")
        .option("maxRowsInMemory", 20)
        .load(cbaPath + "/atm report 17-dec-2020.xlsx");
System.out.println("created df ............");
df.printSchema();
System.out.println(df.columns()[0]) ;
System.out.println(df.col("TILL ACCT_NAME")) ;
df.show(5);
System.out.println(df.tail(3)) ;
System.out.println(df.count());
Properties prop = new Properties() ;
prop.setProperty("driver", "org.postgresql.Driver") ;
prop.setProperty("user","postgres") ;
prop.setProperty("password","oracle") ;
//jdbc
df.write().mode(SaveMode.Overwrite)
.jdbc("jdbc:postgresql://localhost:5432/postgres", "cbadata",prop);
System.out.println("success");
spark.close();
return ;
}
}
This is the full log
2021-01-04 03:56:25.109 INFO 5392 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 2021 (http) with context path ''
2021-01-04 03:56:25.123 INFO 5392 --- [ restartedMain] com.aet.AetProcessorApplication : Started AetProcessorApplication in 4.763 seconds (JVM running for 6.004)
spark in spring .....
calling spark method ............
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/C:/Users/DELL/.m2/repository/org/apache/spark/spark-unsafe_2.12/3.0.1/spark-unsafe_2.12-3.0.1.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2021-01-04 03:56:25.748 INFO 5392 --- [ restartedMain] org.apache.spark.SparkContext : Running Spark version 3.0.1
2021-01-04 03:56:26.027 WARN 5392 --- [ restartedMain] org.apache.hadoop.util.NativeCodeLoader : Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2021-01-04 03:56:26.134 INFO 5392 --- [ restartedMain] org.apache.spark.resource.ResourceUtils : ==============================================================
2021-01-04 03:56:26.136 INFO 5392 --- [ restartedMain] org.apache.spark.resource.ResourceUtils : Resources for spark.driver:
2021-01-04 03:56:26.136 INFO 5392 --- [ restartedMain] org.apache.spark.resource.ResourceUtils : ==============================================================
2021-01-04 03:56:26.136 INFO 5392 --- [ restartedMain] org.apache.spark.SparkContext : Submitted application: CBA and Postillion
2021-01-04 03:56:26.218 INFO 5392 --- [ restartedMain] org.apache.spark.SecurityManager : Changing view acls to: DELL
2021-01-04 03:56:26.219 INFO 5392 --- [ restartedMain] org.apache.spark.SecurityManager : Changing modify acls to: DELL
2021-01-04 03:56:26.219 INFO 5392 --- [ restartedMain] org.apache.spark.SecurityManager : Changing view acls groups to:
2021-01-04 03:56:26.219 INFO 5392 --- [ restartedMain] org.apache.spark.SecurityManager : Changing modify acls groups to:
2021-01-04 03:56:26.220 INFO 5392 --- [ restartedMain] org.apache.spark.SecurityManager : SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(DELL); groups with view permissions: Set(); users with modify permissions: Set(DELL); groups with modify permissions: Set()
2021-01-04 03:56:26.325 INFO 5392 --- [on(1)-127.0.0.1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2021-01-04 03:56:26.326 INFO 5392 --- [on(1)-127.0.0.1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2021-01-04 03:56:26.327 INFO 5392 --- [on(1)-127.0.0.1] o.s.web.servlet.DispatcherServlet : Completed initialization in 1 ms
2021-01-04 03:56:26.962 INFO 5392 --- [ restartedMain] org.apache.spark.util.Utils : Successfully started service 'sparkDriver' on port 59292.
2021-01-04 03:56:26.990 INFO 5392 --- [ restartedMain] org.apache.spark.SparkEnv : Registering MapOutputTracker
2021-01-04 03:56:27.025 INFO 5392 --- [ restartedMain] org.apache.spark.SparkEnv : Registering BlockManagerMaster
2021-01-04 03:56:27.048 INFO 5392 --- [ restartedMain] o.a.s.s.BlockManagerMasterEndpoint : Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
2021-01-04 03:56:27.048 INFO 5392 --- [ restartedMain] o.a.s.s.BlockManagerMasterEndpoint : BlockManagerMasterEndpoint up
2021-01-04 03:56:27.052 INFO 5392 --- [ restartedMain] org.apache.spark.SparkEnv : Registering BlockManagerMasterHeartbeat
2021-01-04 03:56:27.067 INFO 5392 --- [ restartedMain] o.apache.spark.storage.DiskBlockManager : Created local directory at C:\Users\DELL\AppData\Local\Temp\blockmgr-045af8d1-c0b5-4529-bcbf-4df2bba990c9
2021-01-04 03:56:27.097 INFO 5392 --- [ restartedMain] o.a.spark.storage.memory.MemoryStore : MemoryStore started with capacity 3.4 GiB
2021-01-04 03:56:27.114 INFO 5392 --- [ restartedMain] org.apache.spark.SparkEnv : Registering OutputCommitCoordinator
2021-01-04 03:56:27.210 INFO 5392 --- [ restartedMain] org.sparkproject.jetty.util.log : Logging initialized #8091ms to org.sparkproject.jetty.util.log.Slf4jLog
2021-01-04 03:56:27.271 INFO 5392 --- [ restartedMain] org.sparkproject.jetty.server.Server : jetty-9.4.z-SNAPSHOT; built: 2019-04-29T20:42:08.989Z; git: e1bc35120a6617ee3df052294e433f3a25ce7097; jvm 11.0.9+7-LTS
2021-01-04 03:56:27.291 INFO 5392 --- [ restartedMain] org.sparkproject.jetty.server.Server : Started #8171ms
2021-01-04 03:56:27.323 INFO 5392 --- [ restartedMain] o.s.jetty.server.AbstractConnector : Started ServerConnector#47322531{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2021-01-04 03:56:27.324 INFO 5392 --- [ restartedMain] org.apache.spark.util.Utils : Successfully started service 'SparkUI' on port 4040.
2021-01-04 03:56:27.344 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#6e590c26{/jobs,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.346 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#78ce03b0{/jobs/json,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.346 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#9ec6bc5{/jobs/job,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.347 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#2d8c3532{/jobs/job/json,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.347 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#7f909636{/stages,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.348 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#522e0079{/stages/json,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.348 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#3fefd0a{/stages/stage,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.349 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#75e224f9{/stages/stage/json,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.350 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#146082e3{/stages/pool,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.350 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#2f74968c{/stages/pool/json,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.351 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#7d814329{/storage,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.351 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#104ea5ce{/storage/json,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.352 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#3f0007d7{/storage/rdd,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.352 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#6db187ce{/storage/rdd/json,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.353 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#6684dd15{/environment,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.354 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#2ee931e4{/environment/json,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.354 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#1d09a68f{/executors,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.355 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#ffbf151{/executors/json,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.355 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#2f617739{/executors/threadDump,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.357 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#43db9cfa{/executors/threadDump/json,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.364 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#7c6c7bcf{/static,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.365 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#55eadd2a{/,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.366 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#67de2d25{/api,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.366 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#5b2023f3{/jobs/job/kill,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.367 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#2b56d9a2{/stages/stage/kill,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.368 INFO 5392 --- [ restartedMain] org.apache.spark.ui.SparkUI : Bound SparkUI to 0.0.0.0, and started at http://DESKTOP-TVLS5UO:4040
2021-01-04 03:56:27.530 INFO 5392 --- [ restartedMain] org.apache.spark.executor.Executor : Starting executor ID driver on host DESKTOP-TVLS5UO
2021-01-04 03:56:27.557 INFO 5392 --- [ restartedMain] org.apache.spark.util.Utils : Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 59309.
2021-01-04 03:56:27.557 INFO 5392 --- [ restartedMain] o.a.s.n.netty.NettyBlockTransferService : Server created on DESKTOP-TVLS5UO:59309
2021-01-04 03:56:27.559 INFO 5392 --- [ restartedMain] org.apache.spark.storage.BlockManager : Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
2021-01-04 03:56:27.567 INFO 5392 --- [ restartedMain] o.a.spark.storage.BlockManagerMaster : Registering BlockManager BlockManagerId(driver, DESKTOP-TVLS5UO, 59309, None)
2021-01-04 03:56:27.571 INFO 5392 --- [ckManagerMaster] o.a.s.s.BlockManagerMasterEndpoint : Registering block manager DESKTOP-TVLS5UO:59309 with 3.4 GiB RAM, BlockManagerId(driver, DESKTOP-TVLS5UO, 59309, None)
2021-01-04 03:56:27.575 INFO 5392 --- [ restartedMain] o.a.spark.storage.BlockManagerMaster : Registered BlockManager BlockManagerId(driver, DESKTOP-TVLS5UO, 59309, None)
2021-01-04 03:56:27.576 INFO 5392 --- [ restartedMain] org.apache.spark.storage.BlockManager : Initialized BlockManager: BlockManagerId(driver, DESKTOP-TVLS5UO, 59309, None)
2021-01-04 03:56:27.590 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#4620dc6d{/metrics/json,null,AVAILABLE,#Spark}
called spark method ............
2021-01-04 03:56:27.881 WARN 5392 --- [ restartedMain] o.apache.spark.sql.internal.SharedState : URL.setURLStreamHandlerFactory failed to set FsUrlStreamHandlerFactory
2021-01-04 03:56:27.882 INFO 5392 --- [ restartedMain] o.apache.spark.sql.internal.SharedState : Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/C:/Users/DELL/eclipse-workspace/AETProcessor/spark-warehouse').
2021-01-04 03:56:27.882 INFO 5392 --- [ restartedMain] o.apache.spark.sql.internal.SharedState : Warehouse path is 'file:/C:/Users/DELL/eclipse-workspace/AETProcessor/spark-warehouse'.
2021-01-04 03:56:27.897 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#748956c{/SQL,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.897 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#3830a54{/SQL/json,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.898 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#759bddde{/SQL/execution,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.899 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#4d4d1316{/SQL/execution/json,null,AVAILABLE,#Spark}
2021-01-04 03:56:27.900 INFO 5392 --- [ restartedMain] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler#5d1115f9{/static/sql,null,AVAILABLE,#Spark}
2021-01-04 03:56:37.788 WARN 5392 --- [tor-heartbeater] o.a.spark.executor.ProcfsMetricsGetter : Exception when trying to compute pagesize, as a result reporting of ProcessTree metrics is stopped
2021-01-04 03:56:41.699 INFO 5392 --- [ restartedMain] org.apache.spark.SparkContext : Starting job: aggregate at InferSchema.scala:33
2021-01-04 03:56:41.713 INFO 5392 --- [uler-event-loop] org.apache.spark.scheduler.DAGScheduler : Got job 0 (aggregate at InferSchema.scala:33) with 1 output partitions
2021-01-04 03:56:41.714 INFO 5392 --- [uler-event-loop] org.apache.spark.scheduler.DAGScheduler : Final stage: ResultStage 0 (aggregate at InferSchema.scala:33)
2021-01-04 03:56:41.714 INFO 5392 --- [uler-event-loop] org.apache.spark.scheduler.DAGScheduler : Parents of final stage: List()
2021-01-04 03:56:41.715 INFO 5392 --- [uler-event-loop] org.apache.spark.scheduler.DAGScheduler : Missing parents: List()
2021-01-04 03:56:41.720 INFO 5392 --- [uler-event-loop] org.apache.spark.scheduler.DAGScheduler : Submitting ResultStage 0 (ParallelCollectionRDD[0] at parallelize at ExcelRelation.scala:98), which has no missing parents
2021-01-04 03:56:41.790 INFO 5392 --- [uler-event-loop] o.a.spark.storage.memory.MemoryStore : Block broadcast_0 stored as values in memory (estimated size 2.6 KiB, free 3.4 GiB)
2021-01-04 03:56:41.850 INFO 5392 --- [uler-event-loop] o.a.spark.storage.memory.MemoryStore : Block broadcast_0_piece0 stored as bytes in memory (estimated size 1497.0 B, free 3.4 GiB)
2021-01-04 03:56:41.853 INFO 5392 --- [ckManagerMaster] o.apache.spark.storage.BlockManagerInfo : Added broadcast_0_piece0 in memory on DESKTOP-TVLS5UO:59309 (size: 1497.0 B, free: 3.4 GiB)
2021-01-04 03:56:41.855 INFO 5392 --- [uler-event-loop] org.apache.spark.SparkContext : Created broadcast 0 from broadcast at DAGScheduler.scala:1223
2021-01-04 03:56:41.873 INFO 5392 --- [uler-event-loop] org.apache.spark.scheduler.DAGScheduler : Submitting 1 missing tasks from ResultStage 0 (ParallelCollectionRDD[0] at parallelize at ExcelRelation.scala:98) (first 15 tasks are for partitions Vector(0))
2021-01-04 03:56:41.874 INFO 5392 --- [uler-event-loop] o.a.spark.scheduler.TaskSchedulerImpl : Adding task set 0.0 with 1 tasks
2021-01-04 03:56:41.966 INFO 5392 --- [er-event-loop-0] o.apache.spark.scheduler.TaskSetManager : Starting task 0.0 in stage 0.0 (TID 0, DESKTOP-TVLS5UO, executor driver, partition 0, PROCESS_LOCAL, 8733 bytes)
2021-01-04 03:56:41.977 INFO 5392 --- [rker for task 0] org.apache.spark.executor.Executor : Running task 0.0 in stage 0.0 (TID 0)
2021-01-04 03:56:42.088 INFO 5392 --- [rker for task 0] org.apache.spark.executor.Executor : Finished task 0.0 in stage 0.0 (TID 0). 1016 bytes result sent to driver
2021-01-04 03:56:42.095 INFO 5392 --- [result-getter-0] o.apache.spark.scheduler.TaskSetManager : Finished task 0.0 in stage 0.0 (TID 0) in 173 ms on DESKTOP-TVLS5UO (executor driver) (1/1)
2021-01-04 03:56:42.097 INFO 5392 --- [result-getter-0] o.a.spark.scheduler.TaskSchedulerImpl : Removed TaskSet 0.0, whose tasks have all completed, from pool
2021-01-04 03:56:42.102 INFO 5392 --- [uler-event-loop] org.apache.spark.scheduler.DAGScheduler : ResultStage 0 (aggregate at InferSchema.scala:33) finished in 0.367 s
2021-01-04 03:56:42.109 INFO 5392 --- [uler-event-loop] org.apache.spark.scheduler.DAGScheduler : Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
2021-01-04 03:56:42.109 INFO 5392 --- [uler-event-loop] o.a.spark.scheduler.TaskSchedulerImpl : Killing all running tasks in stage 0: Stage finished
2021-01-04 03:56:42.112 INFO 5392 --- [ restartedMain] org.apache.spark.scheduler.DAGScheduler : Job 0 finished: aggregate at InferSchema.scala:33, took 0.412763 s
2021-01-04 03:56:42.711 INFO 5392 --- [ckManagerMaster] o.apache.spark.storage.BlockManagerInfo : Removed broadcast_0_piece0 on DESKTOP-TVLS5UO:59309 in memory (size: 1497.0 B, free: 3.4 GiB)
created df ............
root
|-- HTD_TRAN_DATE: string (nullable = true)
|-- tran_Amt: double (nullable = true)
|-- TILL ACCT_NAME: string (nullable = true)
|-- TRAN_ID: string (nullable = true)
|-- REF_NUM: string (nullable = true)
|-- TILL ACCT NUM: string (nullable = true)
|-- tranRmk: string (nullable = true)
|-- HTD_VALUE_DATE: string (nullable = true)
|-- HTD_PSTD_USER_ID: string (nullable = true)
|-- GAM_SOL_ID: string (nullable = true)
|-- STAN: string (nullable = true)
|-- retrieval_number: string (nullable = true)
HTD_TRAN_DATE
TILL ACCT_NAME
As you can see, the Spark DataFrame is created and I can print its schema, but when I call the show() method no data is printed. I have also tried df.count() and nothing is printed out, and even the jdbc method to save the data to the database does not work. Has anybody experienced this? Kindly help.
I have a dockerized Spring Boot application. If I run the image on my machine (Ubuntu), everything works fine with the default Docker network mode. But as soon as I run the image on an enterprise server (VPS) with the same default mode, it hangs on startup and stays there forever.
2020-03-24 08:26:47.590 INFO 1 --- [ main] org.hibernate.Version : HHH000412: Hibernate Core {5.4.10.Final}
2020-03-24 08:26:47.821 INFO 1 --- [ main] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {5.1.0.Final}
2020-03-24 08:26:48.015 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2020-03-24 08:26:48.125 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
2020-03-24 08:26:48.151 INFO 1 --- [ main] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.PostgreSQL95Dialect
Or, with debug logging enabled:
2020-03-24 08:19:43.856 DEBUG 1 --- [onnection adder] o.p.core.v3.ConnectionFactoryImpl : Send Buffer Size is 43,520
2020-03-24 08:19:43.935 DEBUG 1 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection#7b248da4
2020-03-24 08:19:43.935 DEBUG 1 --- [onnection adder] org.postgresql.Driver : Connecting with URL: jdbc:postgresql://157.46.186.128:62013/db
2020-03-24 08:19:43.935 DEBUG 1 --- [onnection adder] org.postgresql.jdbc.PgConnection : PostgreSQL JDBC Driver 42.2.9
2020-03-24 08:19:43.935 DEBUG 1 --- [onnection adder] org.postgresql.jdbc.PgConnection : setDefaultFetchSize = 0
2020-03-24 08:19:43.935 DEBUG 1 --- [onnection adder] org.postgresql.jdbc.PgConnection : setPrepareThreshold = 5
2020-03-24 08:19:43.935 DEBUG 1 --- [onnection adder] o.p.core.v3.ConnectionFactoryImpl : Trying to establish a protocol version 3 connection to 160.46.186.128:62013
2020-03-24 08:19:43.936 DEBUG 1 --- [onnection adder] o.p.core.v3.ConnectionFactoryImpl : Receive Buffer Size is 186,240
2020-03-24 08:19:43.937 DEBUG 1 --- [onnection adder] o.p.core.v3.ConnectionFactoryImpl : Send Buffer Size is 43,520
2020-03-24 08:19:44.007 DEBUG 1 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection#205272d5
2020-03-24 08:19:44.008 DEBUG 1 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - After adding stats (total=10, active=1, idle=9, waiting=0)
2020-03-24 08:20:10.592 DEBUG 1 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Pool stats (total=10, active=1, idle=9, waiting=0)
2020-03-24 08:20:39.272 DEBUG 1 --- [alina-utility-2] org.apache.catalina.session.ManagerBase : Start expire sessions StandardManager at 1585038039271 sessioncount 0
2020-03-24 08:20:39.272 DEBUG 1 --- [alina-utility-2] org.apache.catalina.session.ManagerBase : End expire sessions StandardManager processingTime 1 expired sessions: 0
2020-03-24 08:20:40.592 DEBUG 1 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Pool stats (total=10, active=1, idle=9, waiting=0)
2020-03-24 08:21:10.593 DEBUG 1 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Pool stats (total=10, active=1, idle=9, waiting=0)
2020-03-24 08:21:39.274 DEBUG 1 --- [alina-utility-2] org.apache.catalina.session.ManagerBase : Start expire sessions StandardManager at 1585038099274 sessioncount 0
2020-03-24 08:21:39.274 DEBUG 1 --- [alina-utility-2] org.apache.catalina.session.ManagerBase : End expire sessions StandardManager processingTime 0 expired sessions: 0
2020-03-24 08:21:40.593 DEBUG 1 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Pool stats (total=10, active=1, idle=9, waiting=0)
2020-03-24 08:22:10.594 DEBUG 1 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Pool stats (total=10, active=1, idle=9, waiting=0)
2020-03-24 08:22:39.277 DEBUG 1 --- [alina-utility-1] org.apache.catalina.session.ManagerBase : Start expire sessions StandardManager at 1585038159277 sessioncount 0
2020-03-24 08:22:39.277 DEBUG 1 --- [alina-utility-1] org.apache.catalina.session.ManagerBase : End expire sessions StandardManager processingTime 0 expired sessions: 0
2020-03-24 08:22:40.594 DEBUG 1 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Pool stats (total=10, active=1, idle=9, waiting=0)
2020-03-24 08:23:10.595 DEBUG 1 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Pool stats (total=10, active=1, idle=9, waiting=0)
I have tried configuring a custom network at the Docker level, and using IPs and hostnames, with no luck.
Has anyone experienced the same behavior?
Spring Config:
spring:
flyway:
url: jdbc:postgresql://157.46.186.128:62013/db
user: user
password: xxxxx
locations: classpath:db/migration
schemas: db1
datasource:
db:
jdbcUrl: "jdbc:postgresql://157.46.186.128:62013/db"
username: "user"
password: "xxxxx"
driver-class-name: "org.postgresql.Driver"
dialect: "org.hibernate.dialect.PostgreSQL95Dialect"
ci:
jdbcUrl: "jdbc:postgresql://157.46.186.128:62013/db"
username: "user"
password: "xxxxx"
driver-class-name: "org.postgresql.Driver"
dialect: "org.hibernate.dialect.PostgreSQL95Dialect"
schema: ci
docker network settings:
with host mode
"Bridge": "",
"SandboxID": "49fa5d5e812c816a84efe92156e825151b2dc75a36bb68c399b943e06c26a6f7",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/default",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"host": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "d6ecd673f06e15560a16dc5bb38ea4be99e751bfb729d819ab76c50320836443",
"EndpointID": "087b1a6fed2cd5b4de46c6700a9ef7e79de6fc3e58a0ecfdd74a8a851429eaa6",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "",
"DriverOpts": null
}
}
}
And without host mode:
"NetworkSettings": {
"Bridge": "",
"SandboxID": "cf8d68a1b1c61d454ed45eb48f4007591d07414f2147410c1381d60e12b93445",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"7979/tcp": null,
"8080/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8080"
}
]
},
"SandboxKey": "/var/run/docker/netns/cf8d68a1b1c6",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "49993412f56f7bd29edd55091bd77bba66a40ae566daf8111f891be9aa21d4ce",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:02",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "bf5b2dfe32339838a40b351409fb7750f632a3810cb8774db195f49eed8a3ed5",
"EndpointID": "49993412f56f7bd29edd55091bd77bba66a40ae566daf8111f891be9aa21d4ce",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02",
"DriverOpts": null
}
}
}
I had to set the Docker daemon's MTU to match the network card's.
ip link show
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
edited /etc/docker/daemon.json:
{
    "mtu": 1450
}
Problem solved. It was a tricky one: when I did remote debugging I noticed the application was blocking on a socket read, and from there Google was my friend.
https://mlohr.com/docker-mtu/
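To make the mismatch concrete: the host NIC (`eth0`) advertises MTU 1450 while `docker0` defaults to 1500, so packets sized for the bridge can be silently dropped on the wire. Below is a small sketch of the check as plain Java, parsing sample `ip link show` output; the `MtuCheck` class and the inlined sample are illustrative, not part of the original post.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MtuCheck {
    // Matches lines like "2: eth0: <...> mtu 1450 qdisc ..."
    static final Pattern LINE = Pattern.compile("\\d+:\\s+([^:]+):.*\\bmtu (\\d+)");

    // Extract interface-name -> MTU from `ip link show` output.
    static Map<String, Integer> mtus(String ipLinkOutput) {
        Map<String, Integer> out = new LinkedHashMap<>();
        for (String line : ipLinkOutput.split("\n")) {
            Matcher m = LINE.matcher(line);
            if (m.find()) out.put(m.group(1), Integer.parseInt(m.group(2)));
        }
        return out;
    }

    public static void main(String[] args) {
        String sample = "2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast state UP\n"
                      + "3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP";
        Map<String, Integer> m = mtus(sample);
        if (m.get("docker0") > m.get("eth0")) {
            System.out.println("docker0 MTU exceeds eth0; set \"mtu\" in /etc/docker/daemon.json");
        }
    }
}
```

If `docker0` reports a larger MTU than the physical interface, aligning them in `daemon.json` and restarting the daemon was the fix that worked here.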
These are the current dependencies included in my web application.
dependencies {
implementation("org.springframework.boot:spring-boot-starter-data-jpa")
implementation("org.springframework.boot:spring-boot-starter-data-rest")
implementation("org.springframework.boot:spring-boot-starter-web")
implementation("org.springframework.boot:spring-boot-starter-web-services")
implementation("com.fasterxml.jackson.module:jackson-module-kotlin")
implementation("org.jetbrains.kotlin:kotlin-reflect")
implementation("org.jetbrains.kotlin:kotlin-stdlib-jdk8")
implementation("com.google.firebase:firebase-admin:6.10.0")
}
Every time I run the application I get the following error:
2019-09-30 13:32:38.385 INFO 17104 --- [ restartedMain] unito.taas.project.ProjectApplicationKt : Starting ProjectApplicationKt on LAPTOP-K1DHEJQ6 with PID 17104 (C:\Users\beppe\Desktop\project\TAAS_project\project\build\classes\kotlin\main started by beppe in C:\Users\beppe\Desktop\project\TAAS_project)
2019-09-30 13:32:38.388 INFO 17104 --- [ restartedMain] unito.taas.project.ProjectApplicationKt : No active profile set, falling back to default profiles: default
2019-09-30 13:32:38.444 INFO 17104 --- [ restartedMain] .e.DevToolsPropertyDefaultsPostProcessor : Devtools property defaults active! Set 'spring.devtools.add-properties' to 'false' to disable
2019-09-30 13:32:39.177 INFO 17104 --- [ restartedMain] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data repositories in DEFAULT mode.
2019-09-30 13:32:39.263 INFO 17104 --- [ restartedMain] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 77ms. Found 3 repository interfaces.
2019-09-30 13:32:39.785 INFO 17104 --- [ restartedMain] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2019-09-30 13:32:39.976 INFO 17104 --- [ restartedMain] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
2019-09-30 13:32:40.045 INFO 17104 --- [ restartedMain] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [
name: default
...]
2019-09-30 13:32:40.127 INFO 17104 --- [ restartedMain] org.hibernate.Version : HHH000412: Hibernate Core {5.3.11.Final}
2019-09-30 13:32:40.128 INFO 17104 --- [ restartedMain] org.hibernate.cfg.Environment : HHH000206: hibernate.properties not found
2019-09-30 13:32:40.317 INFO 17104 --- [ restartedMain] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {5.0.4.Final}
2019-09-30 13:32:40.734 INFO 17104 --- [ restartedMain] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.H2Dialect
2019-09-30 13:32:41.419 INFO 17104 --- [ restartedMain] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'
2019-09-30 13:32:41.439 INFO 17104 --- [ restartedMain] o.s.b.d.a.OptionalLiveReloadServer : LiveReload server is running on port 35729
2019-09-30 13:32:43.049 INFO 17104 --- [ restartedMain] unito.taas.project.ProjectApplicationKt : Started ProjectApplicationKt in 5.053 seconds (JVM running for 5.806)
2019-09-30 13:32:43.063 INFO 17104 --- [ Thread-9] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
2019-09-30 13:32:43.068 INFO 17104 --- [ Thread-9] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
2019-09-30 13:32:43.073 INFO 17104 --- [ Thread-9] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
How can I avoid the application closing?
EDIT:
The following is my SpringBoot annotated class
package unito.taas.project
import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication
@SpringBootApplication
class ProjectApplication
fun main(args: Array<String>) {
runApplication<ProjectApplication>(*args)
}
When I run the command `java -jar mypro-0.0.1-SNAPSHOT.jar --logging.level.root=TRACE`, the Spring Boot application fails to start, though on some computers it works.
The Spring Boot version is 2.0.0.RELEASE, and the pom contains:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
The log is as follows:
2018-07-07 18:02:27.487 DEBUG 21534 --- [ main] o.e.j.u.DecoratedObjectFactory : Creating Instance: class org.eclipse.jetty.websocket.jsr356.encoders.LongEncoder
2018-07-07 18:02:27.487 DEBUG 21534 --- [ main] o.e.j.u.DecoratedObjectFactory : Creating Instance: class org.eclipse.jetty.websocket.jsr356.encoders.ShortEncoder
2018-07-07 18:02:27.487 DEBUG 21534 --- [ main] o.e.j.u.DecoratedObjectFactory : Creating Instance: class org.eclipse.jetty.websocket.jsr356.encoders.ByteBufferEncoder
2018-07-07 18:02:27.487 DEBUG 21534 --- [ main] o.e.j.u.DecoratedObjectFactory : Creating Instance: class org.eclipse.jetty.websocket.jsr356.encoders.ByteArrayEncoder
2018-07-07 18:02:27.487 DEBUG 21534 --- [ main] o.e.j.s.h.AbstractHandler : starting org.springframework.boot.web.embedded.jetty.JettyEmbeddedErrorHandler#7d0b7e3c
2018-07-07 18:02:27.488 INFO 21534 --- [ main] o.e.j.s.h.ContextHandler : Started o.s.b.w.e.j.JettyEmbeddedWebAppContext#4310d43{/,[file:///tmp/jetty-docbase.8016560841686360205.13000/],AVAILABLE}
2018-07-07 18:02:27.488 DEBUG 21534 --- [ main] o.e.j.w.WebAppContext : postConfigure o.s.b.w.e.j.JettyEmbeddedWebAppContext#4310d43{/,[file:///tmp/jetty-docbase.8016560841686360205.13000/],AVAILABLE} with org.springframework.boot.web.embedded.jetty.ServletContextInitializerConfiguration#5824a83d
2018-07-07 18:02:27.488 DEBUG 21534 --- [ main] o.e.j.w.WebAppContext : postConfigure o.s.b.w.e.j.JettyEmbeddedWebAppContext#4310d43{/,[file:///tmp/jetty-docbase.8016560841686360205.13000/],AVAILABLE} with org.springframework.boot.autoconfigure.websocket.servlet.JettyWebSocketServletWebServerCustomizer$1#537f60bf
2018-07-07 18:02:27.488 DEBUG 21534 --- [ main] o.e.j.w.WebAppContext : postConfigure o.s.b.w.e.j.JettyEmbeddedWebAppContext#4310d43{/,[file:///tmp/jetty-docbase.8016560841686360205.13000/],AVAILABLE} with org.springframework.boot.web.embedded.jetty.JettyServletWebServerFactory$1#5677323c
2018-07-07 18:02:27.488 DEBUG 21534 --- [ main] o.e.j.w.WebAppContext : postConfigure o.s.b.w.e.j.JettyEmbeddedWebAppContext#4310d43{/,[file:///tmp/jetty-docbase.8016560841686360205.13000/],AVAILABLE} with org.springframework.boot.web.embedded.jetty.JettyServletWebServerFactory$2#18df8434
2018-07-07 18:02:27.488 DEBUG 21534 --- [ main] o.e.j.s.h.AbstractHandler : starting org.eclipse.jetty.server.handler.ErrorHandler#a38c7fe
2018-07-07 18:02:27.489 INFO 21534 --- [ main] o.e.j.s.Server : Started #4487ms
2018-07-07 18:02:27.512 DEBUG 21534 --- [ main] o.e.j.s.Server : doStop org.eclipse.jetty.server.Server#2dc54ad4[9.4.8.v20171121]
2018-07-07 18:02:27.513 DEBUG 21534 --- [ main] o.e.j.s.Server : Graceful shutdown org.eclipse.jetty.server.Server#2dc54ad4[9.4.8.v20171121] by
2018-07-07 18:02:27.514 DEBUG 21534 --- [ main] o.e.j.s.h.AbstractHandler : stopping org.eclipse.jetty.server.Server#2dc54ad4[9.4.8.v20171121]
2018-07-07 18:02:27.514 INFO 21534 --- [ main] o.e.j.s.session : Stopped scavenging
2018-07-07 18:02:27.514 DEBUG 21534 --- [ main] o.e.j.s.h.AbstractHandler : stopping org.eclipse.jetty.server.handler.ErrorHandler#a38c7fe
2018-07-07 18:02:27.514 DEBUG 21534 --- [ main] o.e.j.s.h.AbstractHandler : stopping o.s.b.w.e.j.JettyEmbeddedWebAppContext#4310d43{/,[file:///tmp/jetty-docbase.8016560841686360205.13000/],UNAVAILABLE}
2018-07-07 18:02:27.514 DEBUG 21534 --- [ main] o.e.j.s.h.AbstractHandler : stopping org.springframework.boot.web.embedded.jetty.JettyEmbeddedErrorHandler#7d0b7e3c
2018-07-07 18:02:27.515 DEBUG 21534 --- [ main] o.e.j.s.h.AbstractHandler : stopping org.eclipse.jetty.server.session.SessionHandler1420232606==dftMaxIdleSec=1800
2018-07-07 18:02:27.515 DEBUG 21534 --- [ main] o.e.j.s.h.AbstractHandler : stopping org.eclipse.jetty.security.ConstraintSecurityHandler#26e356f0
2018-07-07 18:02:27.515 DEBUG 21534 --- [ main] o.e.j.s.h.AbstractHandler : stopping org.springframework.boot.web.embedded.jetty.JettyEmbeddedWebAppContext$JettyEmbeddedServletHandler#4b8ee4de
2018-07-07 18:02:27.517 INFO 21534 --- [ main] o.e.j.s.h.ContextHandler : Stopped o.s.b.w.e.j.JettyEmbeddedWebAppContext#4310d43{/,[file:///tmp/jetty-docbase.8016560841686360205.13000/],UNAVAILABLE}
Something called Server.stop(), which performed a graceful stop of the server.
The logging line with ...
o.e.j.s.Server : doStop org.eclipse.jetty.server.Server#2dc54ad4[9.4.8.v20171121]
... is telling you this.
You can add a breakpoint within Server.doStop() (the code that handles the .stop() command from the LifeCycle) and see where that call came from.
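If attaching a debugger is not convenient, the same question ("who called stop()?") can be answered by dumping the current call stack at the suspect point. Here is a minimal, framework-free sketch of the idea; the `doStop` name only mirrors Jetty's method and the `WhoCalledStop` class is illustrative, not Jetty code:

```java
public class WhoCalledStop {

    // Prints and returns the immediate caller - the same information a
    // breakpoint inside Server.doStop() would show you.
    static String doStop() {
        StackTraceElement[] st = Thread.currentThread().getStackTrace();
        // st[0] = getStackTrace, st[1] = doStop, st[2] = whoever invoked doStop
        String caller = st[2].getClassName() + "." + st[2].getMethodName();
        System.out.println("doStop() invoked from: " + caller);
        return caller;
    }

    public static void main(String[] args) {
        doStop();
    }
}
```

In a real Jetty application, one option (untested here) is to register a LifeCycle listener on the `Server` whose `lifeCycleStopping` callback prints `new Exception().printStackTrace()`, which yields the full call chain that triggered the stop without modifying Jetty itself.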