Scheduled task runs twice in Tomcat production - Spring

I have a Spring Boot 2 app with one @Scheduled class. In the development environment my @Scheduled method runs once.
But when I build for production (using mvn clean package) and deploy to Tomcat 8,
my task executes twice.
This is my @Scheduled class:
@Service
@EnableScheduling
public class SchedulerEmailService {

    @Autowired
    private SenderEmailService senderEmailService;

    private static final Logger LOG = LoggerFactory.getLogger(SchedulerEmailService.class);

    @Autowired
    private TaskService taskService;

    @Scheduled(fixedDelay = 10000)
    public void run() {
        LOG.info("Status do Servico: " + taskService.isEnabled());
        if (taskService.isEnabled()) {
            LOG.info("Executando... {}", LocalDateTime.now());
            senderEmailService.enviarEmail();
        } else {
            LOG.info("Falsa execução do servico... {}", LocalDateTime.now());
        }
    }
}
In production this is the log:
O servico foi parado
Status do Servico: false
Falsa execução do servico... 2018-08-21T15:26:59.663
Status do Servico: true
Executando... 2018-08-21T15:27:01.183
Status do Servico: false
Falsa execução do servico... 2018-08-21T15:27:09.664
Status do Servico: true
Executando... 2018-08-21T15:27:11.368
You can see in the log that it runs once with false and once with true.
Note: I set the taskService.isRunning variable to false,
but there is another task executing with the default value true.
Edit
I printed the hash code of the class in the log, and this is the result:
2018-08-22 07:49:45.996 INFO 8168 --- [pool-19-thread-1] c.c.s.services.SchedulerEmailService : Status do Servico: true
2018-08-22 07:49:45.996 INFO 8168 --- [pool-19-thread-1] c.c.s.services.SchedulerEmailService : HashCode Classe: 19875385
2018-08-22 07:49:45.996 INFO 8168 --- [pool-19-thread-1] c.c.s.services.SchedulerEmailService : Executando... 2018-08-22T07:49:45.996
2018-08-22 07:49:50.730 INFO 8168 --- [pool-20-thread-1] c.c.s.services.SchedulerEmailService : Status do Servico: true
2018-08-22 07:49:50.730 INFO 8168 --- [pool-20-thread-1] c.c.s.services.SchedulerEmailService : HashCode Classe: 11898713
2018-08-22 07:49:50.731 INFO 8168 --- [pool-20-thread-1] c.c.s.services.SchedulerEmailService : Executando... 2018-08-22T07:49:50.731
2018-08-22 07:49:56.121 INFO 8168 --- [pool-19-thread-1] c.c.s.services.SchedulerEmailService : Status do Servico: true
2018-08-22 07:49:56.121 INFO 8168 --- [pool-19-thread-1] c.c.s.services.SchedulerEmailService : HashCode Classe: 19875385
2018-08-22 07:49:56.121 INFO 8168 --- [pool-19-thread-1] c.c.s.services.SchedulerEmailService : Executando... 2018-08-22T07:49:56.121
There are two different instances executing.
How can I resolve this?

From the hash codes in the log you can see that SchedulerEmailService gets instantiated twice by Spring.
I'm not 100% sure about the reason, but the @EnableScheduling annotation is not meant to be used on a bean class; it is meant for @Configuration classes:
Enables Spring's scheduled task execution capability, similar to functionality found in Spring's <task:*> XML namespace. To be used on @Configuration classes as follows:
https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/scheduling/annotation/EnableScheduling.html
I would suggest removing the @EnableScheduling annotation from SchedulerEmailService and adding a configuration class with a @ComponentScan that includes the package of SchedulerEmailService:
@Service
public class SchedulerEmailService {

    // ...

    @Scheduled(fixedDelay = 10000)
    public void run() {
        // ...
    }
}

@Configuration
@EnableScheduling
@ComponentScan(basePackages = "your.package")
public class AppConfig {
}

Related

caching with apache ignite with spring boot using spring data

I am trying to do caching with Apache Ignite 2.7.5 in Spring Boot 2. The application starts and the Ignite node comes up, but caching is not working (I don't have any errors in the console).
Here is my config file:
@Bean
public Ignite igniteInstance() {
    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setIgniteInstanceName("springDataNode");
    cfg.setPeerClassLoadingEnabled(true);

    CacheConfiguration studcfg = new CacheConfiguration("studentCache");
    CacheConfiguration coursecfg = new CacheConfiguration("courseCache");
    studcfg.setIndexedTypes(Long.class, Student.class);
    coursecfg.setIndexedTypes(Long.class, Course.class);
    cfg.setCacheConfiguration(new CacheConfiguration[] {studcfg, coursecfg});

    return Ignition.start(cfg);
}
Here are my repositories:
@RepositoryConfig(cacheName = "studentCache")
public interface StudentRepository extends IgniteRepository<Student, Long> {
    List<Student> getStudentByName(String name);
    Student getStudentById(Long id);
}

@RepositoryConfig(cacheName = "courseCache")
public interface CourseRepository extends IgniteRepository<Course, Long> {
    List<Course> getAllCourseByName(String name);

    @Query("SELECT id FROM Breed WHERE id = ?")
    List<Long> getById(Long id, Pageable pageable);
}
Here is the caching part:
public class CacheApp {

    private static AnnotationConfigApplicationContext ctx;

    @Autowired
    private static StudentRepository studentRepository;

    @Autowired
    private static CourseRepository courseRepository;

    public static void main(String[] args) {
        System.out.println("Spring Data Example!");
        ctx = new AnnotationConfigApplicationContext();
        ctx.register(SpringConfig.class);
        ctx.refresh();

        studentRepository = ctx.getBean(StudentRepository.class);
        courseRepository = ctx.getBean(CourseRepository.class);

        Student stud1 = new Student();
        stud1.setId(111);
        stud1.setName("ram");
        studentRepository.save(stud1);

        List<Student> getAllBreeds = studentRepository.getStudentByName("ram");
        for (Student student : getAllBreeds) {
            System.out.println("student:" + student);
        }

        Course course1 = new Course();
        course1.setId(1);
        course1.setName("maths");
        courseRepository.save(course1);

        List<Course> courses = courseRepository.getAllCourseByName("maths");
        for (Course cor : courses) {
            System.out.println("courses:" + cor);
        }
    }
}
Here is my console log:
[01:32:56] Ignite node started OK (id=2eb50680, instance name=springDataNode)
2020-04-25 01:32:56.427 INFO 5056 --- [ main] o.a.i.i.IgniteKernal%springDataNode : Data Regions Configured:
2020-04-25 01:32:56.427 INFO 5056 --- [ main] o.a.i.i.IgniteKernal%springDataNode : ^-- default [initSize=256.0 MiB, maxSize=799.3 MiB, persistence=false]
2020-04-25 01:32:56.428 INFO 5056 --- [ main] o.a.i.i.IgniteKernal%springDataNode :
>>> +----------------------------------------------------------------------+
>>> Ignite ver. 2.7.5#20190603-sha1:be4f2a158bcf79a52ab4f372a6576f20c4f86954
>>> +----------------------------------------------------------------------+
..........................................................
[01:32:56] Topology snapshot [ver=1, locNode=2eb50680, servers=1, clients=0, state=ACTIVE, CPUs=4, offheap=0.78GB, heap=0.87GB]
2020-04-25 01:32:56.431 INFO 5056 --- [ main] o.a.i.i.m.d.GridDiscoveryManager : Topology snapshot [ver=1, locNode=2eb50680, servers=1, clients=0, state=ACTIVE, CPUs=4, offheap=0.78GB, heap=0.87GB]
2020-04-25 01:32:56.547 INFO 5056 --- [ main] com.example.demo.IgniteTestApplication : Started IgniteTestApplication in 4.761 seconds (JVM running for 5.566)
You can see that the caching part is not working. As I am new to this technology, I am unable to figure out what has gone wrong here. Can anyone please help?
You need to use @EnableIgniteRepositories to enable Apache Ignite-backed repositories in Spring Data.
see: https://apacheignite-mix.readme.io/docs/spring-data#spring-data-and-apache-ignite-configuration
Take a look at: https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/springdata/SpringDataExample.java
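For illustration only, the configuration might look like the sketch below. SpringConfig is the config class registered in your main method; the basePackages value is an assumption, so point it at the package that actually contains your repository interfaces:

```java
@Configuration
@EnableIgniteRepositories(basePackages = "com.example.demo.repositories") // assumption: adjust to your repository package
public class SpringConfig {

    @Bean
    public Ignite igniteInstance() {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setIgniteInstanceName("springDataNode");
        cfg.setPeerClassLoadingEnabled(true);

        // same cache setup as in the question
        CacheConfiguration studcfg = new CacheConfiguration("studentCache");
        CacheConfiguration coursecfg = new CacheConfiguration("courseCache");
        studcfg.setIndexedTypes(Long.class, Student.class);
        coursecfg.setIndexedTypes(Long.class, Course.class);
        cfg.setCacheConfiguration(new CacheConfiguration[] {studcfg, coursecfg});

        return Ignition.start(cfg);
    }
}
```

Without @EnableIgniteRepositories, Spring Data never creates proxies for the IgniteRepository interfaces, which matches the symptom of the node starting but the repositories doing nothing.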

Is there any way to know the load time for a java application?

I am trying to compare the performance of Spring Boot and Micronaut.
I have some applications implemented with both frameworks and can get some information about the JVM with Micrometer, but the time each of these frameworks needs to load from scratch and start working is something I am missing.
Is there any way to get it?
Thanks.
Spring Boot logs the startup time in the format:
Started {applicationName} in {time} seconds (JVM running for {jvmTime})
e.g.
2019-05-18 20:50:07.099 INFO 6904 --- [ main] c.e.demo.DemoApplication : Started DemoApplication in 2.156 seconds (JVM running for 3.164)
If you want to have access to the startup time programmatically in your application, you can read the JVM running time on ApplicationStartedEvent:
@Component
public class StartupListener {

    @EventListener
    public void onStartup(ApplicationStartedEvent event) {
        double startupTime = ManagementFactory.getRuntimeMXBean().getUptime() / 1000.0;
        System.out.println("Application started in: " + startupTime);
    }
}
Just to complete the answer with the Micronaut part:
@Singleton
@Requires(notEnv = Environment.TEST)
@Slf4j
public class InitialEventListener implements ApplicationEventListener<ServiceStartedEvent> {

    @Getter
    private long currentTimeMillis;

    @Async
    @Override
    public void onApplicationEvent(ServiceStartedEvent event) {
        currentTimeMillis = System.currentTimeMillis();
        log.info("ServiceStartedEvent at " + currentTimeMillis + ":" + event);
    }
}

EmbeddedKafka AdminClient shuts down before Spring app starts for tests

I'm trying to write integration tests for a Spring Kafka app (Spring Boot 2.0.6, Spring Kafka 2.1.10) and am seeing lots of instances of INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x166e432ebec0001 type:create cxid:0x5e zxid:0x24 txntype:-1 reqpath:n/a Error Path:/brokers/topics/my-topic/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/my-topic/partitions, plus various flavors of the path (/brokers, /brokers/topics, etc.), in the logs before the Spring app starts. The AdminClient then shuts down and this message is logged:
DEBUG org.apache.kafka.common.network.Selector - [SocketServer brokerId=0] Connection with /127.0.0.1 disconnected
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:124)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:93)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:235)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:196)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:547)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:483)
at org.apache.kafka.common.network.Selector.poll(Selector.java:412)
at kafka.network.Processor.poll(SocketServer.scala:575)
at kafka.network.Processor.run(SocketServer.scala:492)
at java.lang.Thread.run(Thread.java:748)
I'm using the @ClassRule startup option in the test like so:
@ClassRule
@Shared
private KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, 'my-topic')
I'm also autowiring a KafkaTemplate and setting the Spring properties for the connection based on the embedded Kafka values:
def setupSpec() {
    System.setProperty('spring.kafka.bootstrap-servers', embeddedKafka.getBrokersAsString());
    System.setProperty('spring.cloud.stream.kafka.binder.zkNodes', embeddedKafka.getZookeeperConnectionString());
}
Once the Spring app starts, I can see more instances of the user-level KeeperException messages: o.a.z.server.PrepRequestProcessor : Got user-level KeeperException when processing sessionid:0x166e445836d0001 type:setData cxid:0x6b zxid:0x2b txntype:-1 reqpath:n/a Error Path:/config/topics/__consumer_offsets Error:KeeperErrorCode = NoNode for /config/topics/__consumer_offsets.
Any idea where I'm going wrong here? I can provide other setup information and log messages but just took an educated guess on what may be most helpful initially.
I'm not familiar with Spock, but what I do know is that a @KafkaListener method is invoked on its own thread, therefore you can't just assert it in the then: block directly.
You need to ensure some kind of blocking wait in your test case.
I tried with the BlockingVariable against the real service, not a mock, and I see your println(message) in the logs. But that BlockingVariable still doesn't work for me somehow:
@DirtiesContext
@SpringBootTest(classes = [KafkaIntTestApplication.class])
@ActiveProfiles('test')
class CustomListenerSpec extends Specification {

    @ClassRule
    @Shared
    public KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, false, 'my-topic')

    @Autowired
    private KafkaTemplate<String, String> template

    @SpyBean
    private SimpleService service

    final def TOPIC_NAME = 'my-topic'

    def setupSpec() {
        System.setProperty('spring.kafka.bootstrapServers', embeddedKafka.getBrokersAsString());
    }

    def 'Sample test'() {
        given:
        def testMessagePayload = "Test message"
        def message = MessageBuilder.withPayload(testMessagePayload).setHeader(KafkaHeaders.TOPIC, TOPIC_NAME).build()
        def result = new BlockingVariable<Boolean>(5)
        service.handleMessage(_) >> {
            result.set(true)
        }

        when: 'We put a message on the topic'
        template.send(message)

        then: 'the service should be called'
        result.get()
    }
}
And the logs look like this:
2018-11-05 13:38:51.089 INFO 8888 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [my-topic-0, my-topic-1]
Test message
BlockingVariable.get() timed out after 5,00 seconds
at spock.util.concurrent.BlockingVariable.get(BlockingVariable.java:113)
at com.example.CustomListenerSpec.Sample test(CustomListenerSpec.groovy:54)
2018-11-05 13:38:55.917 INFO 8888 --- [ main] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext#11ebb1b6: startup date [Mon Nov 05 13:38:49 EST 2018]; root of context hierarchy
Also I had to add this dependency:
testImplementation "org.hamcrest:hamcrest-core"
UPDATE
OK, the real problem was that MockConfig was not visible to the test context configuration, and @Import(MockConfig.class) does the trick. @Primary also gives an additional signal about which bean to pick up for injection in the test class.
@ArtemBilan's response set me on the right path, so thanks to him for chiming in; I was able to figure it out after looking into other BlockingVariable articles and examples. I used BlockingVariable in a mock's response instead of as a callback: when the mock's response is invoked, it sets the value to true, and the then block just does result.get() and the test passes.
@DirtiesContext
@ActiveProfiles('test')
@SpringBootTest
@Import(MockConfig.class)
class CustomListenerSpec extends TestSpecBase {

    @ClassRule
    @Shared
    private KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, false, TOPIC_NAME)

    @Autowired
    private KafkaTemplate<String, String> template

    @Autowired
    private SimpleService service

    final def TOPIC_NAME = 'my-topic'

    def setupSpec() {
        System.setProperty('spring.kafka.bootstrap-servers', embeddedKafka.getBrokersAsString());
    }

    def 'Sample test'() {
        def testMessagePayload = "Test message"
        def message = MessageBuilder.withPayload(testMessagePayload).setHeader(KafkaHeaders.TOPIC, TOPIC_NAME).build()
        def result = new BlockingVariable<Boolean>(5)
        service.handleMessage(_ as String) >> {
            result.set(true)
        }

        when: 'We put a message on the topic'
        template.send(message)

        then: 'the service should be called'
        result.get()
    }
}

Method must be called outside of transactional context - Spring #Transactional

I hope you're well.
I would like to find the best way to ensure that a service method is called outside of a transaction. It would be as follows:
Let's say that we have a method of the form:
@Transactional
public void insertEntity(Entity entity) {
    persistence.save(entity);
}
Now, let's say that we are invoking this method, but we need to be sure that it is not called inside code that is already transactional. The following would be wrong:
@Transactional
public void enclosingTransaction() {
    // Perform long process transaction
    service.insertEntity(entity);
}
What is the best option to make our method insertEntity aware that it is being called inside a running transaction, and to throw an error?
Thanks!
You could invoke the TransactionAspectSupport.currentTransactionStatus().isNewTransaction() method in order to know whether the current transaction is new (i.e. it was not propagated from another @Transactional method) or not:
@Transactional
public void insertEntity(Entity entity) {
    if (!TransactionAspectSupport.currentTransactionStatus().isNewTransaction()) {
        throw new IllegalStateException("Transaction is not new!");
    }
    persistence.save(entity);
}
The static method TransactionAspectSupport.currentTransactionStatus() returns a TransactionStatus object which represents the transaction status of the current method invocation.
I wrote a minimal Spring MVC webapp to test your scenario (I'm omitting configuration classes and files, as well as import and package declarations):
TestController.java
@RestController
public class TestController {

    private static final Logger log = LoggerFactory.getLogger(TestController.class);

    @Autowired
    private ServiceOne serviceOne;

    @Autowired
    private ServiceTwo serviceTwo;

    @GetMapping(path = "/test-transactions")
    public String testTransactions() {
        log.info("*** TestController.testTransactions() ***");

        log.info("* Invoking serviceOne.methodOne()...");
        try {
            serviceOne.methodOne();
        }
        catch (IllegalStateException e) {
            log.error("* {} invoking serviceOne.methodOne()!", e.getClass().getSimpleName());
        }

        log.info("* Invoking serviceTwo.methodTwo()...");
        try {
            serviceTwo.methodTwo();
        }
        catch (IllegalStateException e) {
            log.error("* {} invoking serviceTwo.methodTwo()!", e.getClass().getSimpleName());
        }

        return "OK";
    }
}
ServiceOneImpl.java
@Service
public class ServiceOneImpl implements ServiceOne {

    private static final Logger log = LoggerFactory.getLogger(ServiceOneImpl.class);

    @Autowired
    private ServiceTwo serviceTwo;

    @PersistenceContext
    private EntityManager em;

    @Override
    @Transactional(propagation = Propagation.REQUIRED)
    public void methodOne() {
        log.info("*** ServiceOne.methodOne() ***");
        log.info("getCurrentTransactionName={}", TransactionSynchronizationManager.getCurrentTransactionName());
        log.info("isNewTransaction={}", TransactionAspectSupport.currentTransactionStatus().isNewTransaction());
        log.info("Query result={}", em.createNativeQuery("SELECT 1").getResultList());
        log.info("getCurrentTransactionName={}", TransactionSynchronizationManager.getCurrentTransactionName());
        log.info("isNewTransaction={}", TransactionAspectSupport.currentTransactionStatus().isNewTransaction());
        serviceTwo.methodTwo();
    }
}
ServiceTwoImpl.java
@Service
public class ServiceTwoImpl implements ServiceTwo {

    private static final Logger log = LoggerFactory.getLogger(ServiceTwoImpl.class);

    @PersistenceContext
    private EntityManager em;

    @Override
    @Transactional(propagation = Propagation.REQUIRED)
    public void methodTwo() {
        log.info("*** ServiceTwo.methodTwo() ***");
        log.info("getCurrentTransactionName={}", TransactionSynchronizationManager.getCurrentTransactionName());
        log.info("isNewTransaction={}", TransactionAspectSupport.currentTransactionStatus().isNewTransaction());
        if (!TransactionAspectSupport.currentTransactionStatus().isNewTransaction()) {
            log.warn("Throwing exception because transaction is not new...");
            throw new IllegalStateException("Transaction is not new!");
        }
        log.info("Query result={}", em.createNativeQuery("SELECT 2").getResultList());
        log.info("getCurrentTransactionName={}", TransactionSynchronizationManager.getCurrentTransactionName());
        log.info("isNewTransaction={}", TransactionAspectSupport.currentTransactionStatus().isNewTransaction());
    }
}
And here is the log of the execution:
INFO test.transactions.web.TestController - *** TestController.testTransactions() ***
INFO test.transactions.web.TestController - * Invoking serviceOne.methodOne()...
INFO test.transactions.service.ServiceOneImpl - *** ServiceOne.methodOne() ***
INFO test.transactions.service.ServiceOneImpl - getCurrentTransactionName=test.transactions.service.ServiceOneImpl.methodOne
INFO test.transactions.service.ServiceOneImpl - isNewTransaction=true
INFO test.transactions.service.ServiceOneImpl - Query result=[1]
INFO test.transactions.service.ServiceOneImpl - getCurrentTransactionName=test.transactions.service.ServiceOneImpl.methodOne
INFO test.transactions.service.ServiceOneImpl - isNewTransaction=true
INFO test.transactions.service.ServiceTwoImpl - *** ServiceTwo.methodTwo() ***
INFO test.transactions.service.ServiceTwoImpl - getCurrentTransactionName=test.transactions.service.ServiceOneImpl.methodOne
INFO test.transactions.service.ServiceTwoImpl - isNewTransaction=false
WARN test.transactions.service.ServiceTwoImpl - Throwing exception because transaction is not new...
ERROR test.transactions.web.TestController - * IllegalStateException invoking serviceOne.methodOne()!
INFO test.transactions.web.TestController - * Invoking serviceTwo.methodTwo()...
INFO test.transactions.service.ServiceTwoImpl - *** ServiceTwo.methodTwo() ***
INFO test.transactions.service.ServiceTwoImpl - getCurrentTransactionName=test.transactions.service.ServiceTwoImpl.methodTwo
INFO test.transactions.service.ServiceTwoImpl - isNewTransaction=true
INFO test.transactions.service.ServiceTwoImpl - Query result=[2]
INFO test.transactions.service.ServiceTwoImpl - getCurrentTransactionName=test.transactions.service.ServiceTwoImpl.methodTwo
INFO test.transactions.service.ServiceTwoImpl - isNewTransaction=true

Spring Cloud: Eureka Client registration/deregistration cycle

To familiarize myself with Spring Cloud's Eureka client/server mechanism, I am trying to connect a client to the Eureka server and toggle the connection on/off every 5 minutes to see how the Eureka server handles this.
I have two Eureka clients.
The first one gives me information about the registered applications with this code:
@Autowired
private DiscoveryClient discoveryClient;

@RequestMapping(value = "/services", produces = MediaType.APPLICATION_JSON)
public ResponseEntity<ResourceSupport> applications() {
    ResourceSupport resource = new ResourceSupport();
    Set<String> regions = discoveryClient.getAllKnownRegions();
    for (String region : regions) {
        Applications allApps = discoveryClient.getApplicationsForARegion(region);
        List<Application> registeredApps = allApps.getRegisteredApplications();
        Iterator<Application> it = registeredApps.iterator();
        while (it.hasNext()) {
            Application app = it.next();
            List<InstanceInfo> instancesInfos = app.getInstances();
            if (instancesInfos != null && !instancesInfos.isEmpty()) {
                // only show one of the instances
                InstanceInfo info = instancesInfos.get(0);
                resource.add(new Link(info.getHomePageUrl(), "urls"));
            }
        }
    }
    return new ResponseEntity<ResourceSupport>(resource, HttpStatus.OK);
}
The second Eureka client registers and deregisters itself every 5 minutes:
private static final long EUREKA_INTERVAL = 5 * 60000;

public static void main(String[] args) {
    ConfigurableApplicationContext context = SpringApplication.run(MyServiceApplication.class);
    long currentTime = System.currentTimeMillis();
    long lastToggleTime = System.currentTimeMillis();
    boolean connected = true;
    while (true) {
        if (currentTime - lastToggleTime > EUREKA_INTERVAL) {
            if (connected) {
                System.err.println("disconnect");
                DiscoveryManager.getInstance().shutdownComponent();
                connected = false;
                lastToggleTime = System.currentTimeMillis();
            }
            else {
                System.err.println("connect");
                DiscoveryManager.getInstance().initComponent(
                        DiscoveryManager.getInstance().getEurekaInstanceConfig(),
                        DiscoveryManager.getInstance().getEurekaClientConfig());
                connected = true;
                lastToggleTime = System.currentTimeMillis();
            }
        }
        currentTime = System.currentTimeMillis();
    }
}
The log output of the second Eureka client looks as follows:
disconnect
2015-03-26 13:59:23.713 INFO 3452 --- [ main] com.netflix.discovery.DiscoveryClient : DiscoveryClient_MYAPPNAME/MYMEGAHOSTNAME - deregister status: 200
connect
2015-03-26 14:04:23.870 INFO 3452 --- [ main] com.netflix.discovery.DiscoveryClient : Disable delta property : false
2015-03-26 14:04:23.870 INFO 3452 --- [ main] com.netflix.discovery.DiscoveryClient : Single vip registry refresh property : null
2015-03-26 14:04:23.870 INFO 3452 --- [ main] com.netflix.discovery.DiscoveryClient : Force full registry fetch : false
2015-03-26 14:04:23.870 INFO 3452 --- [ main] com.netflix.discovery.DiscoveryClient : Application is null : false
2015-03-26 14:04:23.870 INFO 3452 --- [ main] com.netflix.discovery.DiscoveryClient : Registered Applications size is zero : true
2015-03-26 14:04:23.870 INFO 3452 --- [ main] com.netflix.discovery.DiscoveryClient : Application version is -1: true
2015-03-26 14:04:23.889 INFO 3452 --- [ main] com.netflix.discovery.DiscoveryClient : Getting all instance registry info from the eureka server
2015-03-26 14:04:23.892 INFO 3452 --- [ main] com.netflix.discovery.DiscoveryClient : The response status is 200
2015-03-26 14:04:23.894 INFO 3452 --- [ main] com.netflix.discovery.DiscoveryClient : Starting heartbeat executor: renew interval is: 30
2015-03-26 14:04:53.916 INFO 3452 --- [ool-11-thread-1] com.netflix.discovery.DiscoveryClient : DiscoveryClient_MYAPPNAME/MYMEGAHOSTNAME - Re-registering apps/MYAPPNAME
2015-03-26 14:04:53.916 INFO 3452 --- [ool-11-thread-1] com.netflix.discovery.DiscoveryClient : DiscoveryClient_MYAPPNAME/MYMEGAHOSTNAME: registering service...
2015-03-26 14:04:53.946 INFO 3452 --- [ool-11-thread-1] com.netflix.discovery.DiscoveryClient : DiscoveryClient_MYAPPNAME/MYMEGAHOSTNAME - registration status: 204
When starting both Eureka clients for the first time, this works well. The second client is shown by the first client AND the second client is visible in the Eureka server console.
When the second client disconnects itself from the Eureka server, it is no longer listed there and the first client also no longer shows it.
Unfortunately, when the second client reconnects to the Eureka server, the Eureka server console just shows a big red highlighted "DOWN (1)" and the first client does not show the second client anymore. What am I missing here?
Solution:
Based on Dave Syer's answer, my solution was to add a custom @Configuration that has the EurekaDiscoveryClientConfiguration autowired and starts a thread for toggling the registration. Note that this is for test purposes only, so it may be quite an ugly solution ;-)
@Configuration
public static class MyDiscoveryClientConfigServiceAutoConfiguration {

    @Autowired
    private EurekaDiscoveryClientConfiguration lifecycle;

    @PostConstruct
    public void init() {
        new Thread(new Runnable() {
            @Override
            public void run() {
                long currentTime = System.currentTimeMillis();
                long lastToggleTime = System.currentTimeMillis();
                boolean connected = true;
                while (true) {
                    if (currentTime - lastToggleTime > EUREKA_INTERVAL) {
                        if (connected) {
                            System.err.println("disconnect");
                            lifecycle.stop();
                            DiscoveryManager.getInstance().getDiscoveryClient().shutdown();
                            connected = false;
                            lastToggleTime = System.currentTimeMillis();
                        }
                        else {
                            System.err.println("connect");
                            DiscoveryManager.getInstance().initComponent(
                                    DiscoveryManager.getInstance().getEurekaInstanceConfig(),
                                    DiscoveryManager.getInstance().getEurekaClientConfig());
                            lifecycle.start();
                            connected = true;
                            lastToggleTime = System.currentTimeMillis();
                        }
                    }
                    currentTime = System.currentTimeMillis();
                }
            }
        }).start();
    }
}
Your call to DiscoveryManager.getInstance().initComponent() does not set the status (and the default is DOWN). In Spring Cloud we handle this in a special EurekaDiscoveryClientConfiguration.start() lifecycle. You could inject that and re-use it like this:
@Autowired
private EurekaDiscoveryClientConfiguration lifecycle;

@PostConstruct
public void init() {
    this.lifecycle.stop();
    if (DiscoveryManager.getInstance().getDiscoveryClient() != null) {
        DiscoveryManager.getInstance().getDiscoveryClient().shutdown();
    }
    ApplicationInfoManager.getInstance().initComponent(this.instanceConfig);
    DiscoveryManager.getInstance().initComponent(this.instanceConfig,
            this.clientConfig);
    this.lifecycle.start();
}
(which is code taken from here: https://github.com/spring-cloud/spring-cloud-netflix/blob/master/spring-cloud-netflix-core/src/main/java/org/springframework/cloud/netflix/config/DiscoveryClientConfigServiceAutoConfiguration.java#L58).
