My application uses Eureka and Ribbon. I'm trying to get two microservices to talk to each other. Below is my method of concern.
@Autowired
@LoadBalanced
private RestTemplate client;

@Autowired
private DiscoveryClient dClient;
public String getServices() {
    List<String> services = dClient.getServices();
    List<ServiceInstance> serviceInstances = new ArrayList<>();
    List<String> serviceHosts = new ArrayList<>();

    for (String service : services) {
        serviceInstances.addAll(dClient.getInstances(service));
    }

    for (ServiceInstance service : serviceInstances) {
        serviceHosts.add(service.getHost());
    }

    // throws the "No instances available" exception here
    try {
        System.out.println(this.client.getForObject("http://MY-MICROSERVICE/rest/hello", String.class, new HashMap<String, String>()));
    }
    catch (Exception e) {
        e.printStackTrace();
    }

    return serviceHosts.toString();
}
The method returns an array of two hostnames (IPs), so DiscoveryClient is able to see instances of the two services registered with Eureka. But RestTemplate, or more precisely Ribbon, throws IllegalStateException: No instances available.
DynamicServerListLoadBalancer for client MY-MICROSERVICE initialized: DynamicServerListLoadBalancer:{NFLoadBalancer:name=MY-MICROSERVICE,current list of Servers=[],Load balancer stats=Zone stats: {},Server stats: []}ServerList:org.springframework.cloud.netflix.ribbon.eureka.DomainExtractingServerList#23edc38f
java.lang.IllegalStateException: No instances available for MY-MICROSERVICE
at org.springframework.cloud.netflix.ribbon.RibbonLoadBalancerClient.execute(RibbonLoadBalancerClient.java:119)
at org.springframework.cloud.netflix.ribbon.RibbonLoadBalancerClient.execute(RibbonLoadBalancerClient.java:99)
at org.springframework.cloud.client.loadbalancer.LoadBalancerInterceptor.intercept(LoadBalancerInterceptor.java:58)
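For context, the question does not show how the load-balanced RestTemplate is declared. Ribbon only resolves service names for a RestTemplate built through a @LoadBalanced bean, so the usual setup looks roughly like the following sketch (an assumption, not taken from the question):

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestClientConfig {

    // Ribbon intercepts calls made through this template and replaces the
    // service name (MY-MICROSERVICE) with a concrete host:port from Eureka.
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}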
Even the Eureka dashboard shows two services registered. I feel the problem is specifically with Ribbon. Here's my config file.
spring.application.name="my-microservice"
logging.level.org.springframework.boot.autoconfigure.logging=INFO
spring.devtools.restart.enabled=true
spring.devtools.add-properties=true
server.ribbon.eureka.enabled=true
eureka.client.serviceUrl.defaultZone = http://localhost:8761/eureka/
The other microservice also has the same configs except for a different name. What's the problem here?
Solved. I was using application.yml with the Eureka server and application.properties with the client. Once I converted everything to YAML, all works fine. (Most likely the underlying cause is that a .properties file keeps the quotes as part of the value, so spring.application.name="my-microservice" registered the service under a name Ribbon could not match, whereas YAML strips the quotes.)
spring:
  application:
    name: "my-microservice"
  devtools:
    restart:
      enabled: true
    add-properties: true
logging:
  level:
    org.springframework.boot.autoconfigure.logging: INFO
eureka:
  client:
    serviceUrl:
      defaultZone: "http://localhost:8761/eureka/"
This is the yml file for both apps which only differ by the application name.
While trying to use listener config properties in application.yml, I am facing an issue where the @KafkaListener-annotated method is not invoked at all if I rely on the application.yml config (listener.type: batch). It only gets invoked when I explicitly call setBatchListener(true) in code. Here are my code and configuration.
Consumer code:
@KafkaListener(containerFactory = "kafkaListenerContainerFactory",
        topics = "${spring.kafka.template.default-topic}",
        groupId = "${spring.kafka.consumer.group-id}")
public void receive(List<ConsumerRecord<String, byte[]>> consumerRecords, Acknowledgment acknowledgment) {
    processor.process(consumerRecords, acknowledgment);
}
application.yml:
listener:
  missing-topics-fatal: false
  type: batch
  ack-mode: manual
Consumer configuration:
// Method signature not shown in the original post; the bean name is assumed from the
// containerFactory attribute of the @KafkaListener above.
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(
            new DefaultKafkaConsumerFactory<>(kafkaProperties.buildConsumerProperties()));
    factory.setErrorHandler(new SeekToCurrentErrorHandler(
            new UpdateMessageErrorHandler(), new FixedBackOff(idleEventInterval, maxFailures)));
    final ContainerProperties properties = factory.getContainerProperties();
    properties.setIdleBetweenPolls(idleBetweenPolls);
    properties.setIdleEventInterval(idleEventInterval);
    return factory;
}
If I'm not mistaken, by building the ConcurrentKafkaListenerContainerFactory yourself in your configuration you're essentially overriding a piece of code that is usually executed in the ConcurrentKafkaListenerContainerFactoryConfigurer class of Spring Boot's auto-configuration:
if (properties.getType().equals(Type.BATCH)) {
    factory.setBatchListener(true);
    factory.setBatchErrorHandler(this.batchErrorHandler);
}
else {
    factory.setErrorHandler(this.errorHandler);
}
Since it's hard-coded in your application.yml file anyway, why is it a bad thing for it to be configured in your @Configuration class instead?
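For reference, here is a hedged sketch of how the two approaches can be combined: let Boot's ConcurrentKafkaListenerContainerFactoryConfigurer apply the application.yml listener.* properties (including type: batch), then layer the custom settings on top. The bean name is assumed from the containerFactory attribute of your @KafkaListener; UpdateMessageErrorHandler, idleEventInterval, idleBetweenPolls and maxFailures are taken from your question.

@Bean
public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ConsumerFactory<Object, Object> consumerFactory) {

    ConcurrentKafkaListenerContainerFactory<Object, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();

    // applies listener.type, ack-mode, missing-topics-fatal, ... from application.yml
    configurer.configure(factory, consumerFactory);

    // custom pieces from the question; note that with listener.type=batch Boot would
    // normally install a batch error handler (see the snippet above), so the record-level
    // SeekToCurrentErrorHandler may need to be replaced with a batch-aware one
    factory.setErrorHandler(new SeekToCurrentErrorHandler(
            new UpdateMessageErrorHandler(), new FixedBackOff(idleEventInterval, maxFailures)));
    factory.getContainerProperties().setIdleBetweenPolls(idleBetweenPolls);
    factory.getContainerProperties().setIdleEventInterval(idleEventInterval);

    return factory;
}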
I have a requirement to read all the spring.cloud.stream.kafka.binder properties from a cloud configuration service. I was able to move the bindings into a separate configuration class by defining a BindingServiceProperties bean, and I was also able to move one set of Kafka binder properties into a configuration class by defining a KafkaBinderConfigurationProperties bean. But if both Kafka binder configurations are moved into Java config, the application is unable to identify the binders and their respective bindings. I need help defining multiple binders in a configuration class and attaching each binder to its respective bindings.
application.yml
spring:
  cloud:
    stream:
      defaultBinder: kafka1
      binders:
        kafka1:
          type: kafka
          environment:
            spring.cloud.stream.kafka.binder:
              brokers: localhost:9095
              autoCreateTopics: true
              configuration:
                auto.offset.reset: latest
        kafka2:
          type: kafka
          environment:
            spring.cloud.stream.kafka.binder:
              brokers: localhost:9096
              autoCreateTopics: true
              configuration:
                auto.offset.reset: latest
Using this application.yml configuration I'm able to attach each binder to its respective bindings.
BinderConfiguration
@Primary
@Bean
public BindingServiceProperties bindingServiceProperties() {
    BindingServiceProperties bindingServiceProperties = new BindingServiceProperties();
    BindingProperties input1BindingProps = getInput1BindingProperties();
    BindingProperties input2BindingProps = getInput2BindingProperties();
    Map<String, BindingProperties> bindingProperties = new HashMap<>();
    bindingProperties.put(StreamBindings.INPUT_1, input1BindingProps);
    bindingProperties.put(StreamBindings.INPUT_2, input2BindingProps);
    bindingServiceProperties.setBindings(bindingProperties);
    return bindingServiceProperties;
}
private BindingProperties getInput1BindingProperties() {
    ConsumerProperties consumerProperties = new ConsumerProperties();
    consumerProperties.setMaxAttempts(1);
    consumerProperties.setDefaultRetryable(false);
    BindingProperties props = new BindingProperties();
    props.setDestination("test1");
    props.setContentType(MediaType.APPLICATION_JSON_VALUE);
    props.setGroup("test-group");
    props.setConsumer(consumerProperties);
    props.setBinder("kafka1");
    return props;
}

private BindingProperties getInput2BindingProperties() {
    ConsumerProperties consumerProperties = new ConsumerProperties();
    consumerProperties.setMaxAttempts(1);
    consumerProperties.setDefaultRetryable(false);
    BindingProperties props = new BindingProperties();
    props.setDestination("test2");
    props.setContentType(MediaType.APPLICATION_JSON_VALUE);
    props.setGroup("test-group");
    props.setConsumer(consumerProperties);
    props.setBinder("kafka2");
    return props;
}
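For completeness, the StreamBindings constants used above are not shown in the question; presumably it is a standard bindable interface roughly like the following sketch (names and channel values are assumptions based on how they are used):

import org.springframework.cloud.stream.annotation.Input;
import org.springframework.messaging.SubscribableChannel;

public interface StreamBindings {

    // binding names referenced from the BindingServiceProperties map above
    String INPUT_1 = "input1";
    String INPUT_2 = "input2";

    @Input(INPUT_1)
    SubscribableChannel input1();

    @Input(INPUT_2)
    SubscribableChannel input2();
}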
KafkaBinderConfigurationProperties
@Bean
public KafkaBinderConfigurationProperties getKafka1BinderProps() {
    KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties
            = new KafkaBinderConfigurationProperties(new KafkaProperties());
    String brokers = "localhost:9095";
    kafkaBinderConfigurationProperties.setBrokers(brokers);
    kafkaBinderConfigurationProperties.setDefaultBrokerPort("9095");
    Map<String, String> configuration = new HashMap<>();
    configuration.put("auto.offset.reset", "latest");
    kafkaBinderConfigurationProperties.setConfiguration(configuration);
    return kafkaBinderConfigurationProperties;
}

@Bean
public KafkaBinderConfigurationProperties getKafka2BinderProps() {
    KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties
            = new KafkaBinderConfigurationProperties(new KafkaProperties());
    String brokers = "localhost:9096";
    kafkaBinderConfigurationProperties.setBrokers(brokers);
    kafkaBinderConfigurationProperties.setDefaultBrokerPort("9096");
    Map<String, String> configuration = new HashMap<>();
    configuration.put("auto.offset.reset", "latest");
    kafkaBinderConfigurationProperties.setConfiguration(configuration);
    return kafkaBinderConfigurationProperties;
}
The application works fine with the properties defined in application.yml, i.e. without moving the binder configuration into a separate configuration class.
Complete code can be found in this repository.
Any help is appreciated.
Currently working on swarmifying our Spring Boot microservice back end with a Eureka service discoverer. The first problem was making sure the service discoverer doesn't pick the ingress IP address but instead the IP address from the overlay network. After some searching I found a post that suggests the following Eureka client configuration:
@Configuration
@EnableConfigurationProperties
public class EurekaClientConfig {

    private ConfigurableEnvironment env;

    public EurekaClientConfig(final ConfigurableEnvironment env) {
        this.env = env;
    }

    @Bean
    @Primary
    public EurekaInstanceConfigBean eurekaInstanceConfigBean(final InetUtils inetUtils) throws IOException {
        final String hostName = System.getenv("HOSTNAME");
        String hostAddress = null;
        final Enumeration<NetworkInterface> networkInterfaces = NetworkInterface.getNetworkInterfaces();
        for (NetworkInterface netInt : Collections.list(networkInterfaces)) {
            for (InetAddress inetAddress : Collections.list(netInt.getInetAddresses())) {
                if (hostName.equals(inetAddress.getHostName())) {
                    hostAddress = inetAddress.getHostAddress();
                    System.out.printf("Inet used: %s", netInt.getName());
                }
                System.out.printf("Inet %s: %s / %s\n", netInt.getName(), inetAddress.getHostName(), inetAddress.getHostAddress());
            }
        }
        if (hostAddress == null) {
            throw new UnknownHostException("Cannot find ip address for hostname: " + hostName);
        }
        final int nonSecurePort = Integer.valueOf(env.getProperty("server.port", env.getProperty("port", "8080")));
        final EurekaInstanceConfigBean instance = new EurekaInstanceConfigBean(inetUtils);
        instance.setHostname(hostName);
        instance.setIpAddress(hostAddress);
        instance.setNonSecurePort(nonSecurePort);
        System.out.println(instance);
        return instance;
    }
}
After deploying the new discoverer I got the correct result: the service discoverer had the correct overlay IP address.
To understand the next step, here is some information about the environment we run this Docker Swarm on. We currently have two droplets, one for development and the other for production. At the moment we are only working on the development server to swarmify it; production hasn't been touched in months.
The next step is to deploy a discovery-client Spring Boot application that connects to the correct service discoverer and also registers with the overlay IP address instead of the ingress one. But when I deploy the application it always connects to our production service discoverer outside the Docker Swarm, on the other droplet. I can see the application being deployed on the swarm, but looking at the Eureka dashboard of the production server I can see that it registers there.
The second problem is that the application also has the EurekaClientConfig you see above, but it is ignored; even the log statements within the method are never printed when starting up the application.
Here is the configuration from the Discovery Client application:
eureka:
  client:
    serviceUrl:
      defaultZone: service-discovery_service:8761/eureka
    enabled: false
  instance:
    instance-id: ${spring.application.name}:${random.value}
    prefer-ip-address: true
spring:
  application:
    name: account-service
I assume that you can use defaultZone to point at the correct service discoverer, but I could be wrong.
Just don't use a Eureka service discoverer but something else like Traefik. Much easier solution.
In a microservice architecture I have three services. The first one creates a message in a Kafka queue with Spring Cloud Stream; in this service I use Spring Cloud Contract to generate a contract. The second service is a Spring Cloud Stub Runner Boot service that reads the contracts of the first service and exposes them to the third service. The third service runs smoke tests against the stub runner service using the endpoint /triggers/{label}. My understanding is that when I call /triggers/{label}, the stub runner service should send the message defined in the contract to the Kafka queue, but it never sends it. How can I make the stub runner service send the contract's message to the Kafka queue?
Thanks
Code:
Service 1
Contract:
org.springframework.cloud.contract.spec.Contract.make {
    description 'Register event: Customer registered'
    label 'CustomerRegistered'
    input {
        // the contract will be triggered by a method
        triggeredBy('registerEvent()')
    }
    // output message of the contract
    outputMessage {
        // destination to which the output message will be sent
        sentTo 'ClassCustomerEvent'
        // the body of the output message
        body('''{"id":1,"eventType":"CustomerRegistered","entity": {"clientId":1,"clientName":"David, Suarez, Pascual","classCalendarId":1,"classCalendarName":"Aula 1 - Aerobic","classCalendarDayId":7}}''')
        headers {
            header('contentType', applicationJson())
        }
    }
}
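As an aside, the triggeredBy('registerEvent()') line means the producer-side tests that Spring Cloud Contract generates will call a registerEvent() method, which usually lives in a base test class on the producer. The base class is not shown in the question; below is a minimal sketch with all names and the channel wiring assumed.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.MessageBuilder;

public abstract class CustomerEventContractBase {

    // hypothetical output channel bound to the 'ClassCustomerEvent' destination
    @Autowired
    private MessageChannel classCustomerEventOutput;

    // invoked by the generated contract tests because of triggeredBy('registerEvent()')
    public void registerEvent() {
        // abbreviated payload; the real producer sends the full CustomerRegistered event
        classCustomerEventOutput.send(MessageBuilder
                .withPayload("{\"id\":1,\"eventType\":\"CustomerRegistered\"}")
                .setHeader("contentType", "application/json")
                .build());
    }
}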
Service 2:
application.yml:
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost
          zkNodes: localhost
      default-binder: kafka
stubrunner:
  cloud:
    stubbed:
      discovery:
        enabled: false
  stubsMode: LOCAL
  ids:
    - com.elipcero.classcustomerschool:classcustomer-school:1.0.0:stubs:8762
Main:
@SpringBootApplication
@EnableStubRunnerServer
@EnableBinding
@AutoConfigureStubRunner
public class ClassCustomerStubrunnerSchoolApplication {

    public static void main(String[] args) {
        SpringApplication.run(ClassCustomerStubrunnerSchoolApplication.class, args);
    }
}
Service 3
SmokeTest:
@Test
public void should_calculate_client_total_by_classrooom_and_set_class_by_client() {
    mongoOperations.dropCollection("CustomerClass");
    mongoOperations.dropCollection("ClassCustomerDayTotal");
    String url = this.stubRunnerUrl + "/triggers/CustomerRegistered";
    log.info("Mongo collections deletes");
    log.info("Url stub runner boot: " + url);
    ResponseEntity<Map> response = this.restTemplate.postForEntity(url, "", Map.class);
    then(response.getStatusCode().is2xxSuccessful()).isTrue();
    log.info("Triggered customer event");
    await().until(() ->
        customerClassRepository
            .findById(1)
            .map((c) -> c.getClasses().isEmpty())
            .orElse(false)
    );
}
Sink:
@Service
@EnableBinding(ClassCustomerConsumer.class)
@RequiredArgsConstructor
public class ClassCustomerEvent {

    public static final String CONST_EVENT_CUSTOMER_REGISTERED = "CustomerRegistered";
    public static final String CONST_EVENT_CUSTOMER_UNREGISTERED = "CustomerUnregistered";

    @NonNull private ClassCustomerTotalView classCustomerTotalView;
    @NonNull private CustomerClassView customerClassView;

    @StreamListener(ClassCustomerConsumer.INPUT)
    public void ConsumeClassCustomerEvent(EventMessage<ClassCustomer> eventMessage) {
        classCustomerTotalView.calculate(eventMessage);
        customerClassView.selectOrUnSelected(eventMessage);
    }
}
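For completeness, ClassCustomerConsumer is referenced above but not shown; presumably it is a standard Spring Cloud Stream input binding along these lines (the binding name below is an assumption):

import org.springframework.cloud.stream.annotation.Input;
import org.springframework.messaging.SubscribableChannel;

public interface ClassCustomerConsumer {

    // binding name assumed; bound to the ClassCustomerEvent destination via configuration
    String INPUT = "classCustomerInput";

    @Input(INPUT)
    SubscribableChannel classCustomerEvents();
}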
Fixed in master for the next version after spring-cloud-contract v2.1.0.M2.
Issue reference: https://github.com/spring-cloud/spring-cloud-contract/pull/805
I have been attempting to get an inbound SubscribableChannel and an outbound MessageChannel working in my Spring Boot application.
I have successfully set up the Kafka channel and tested it.
Furthermore, I have created a basic Spring Boot application that tests sending to and receiving from the channel.
The issue I am having is that when I put the equivalent code in the application it belongs in, the messages never seem to get sent or received. Debugging makes it hard to ascertain what's going on, but the only thing that looks different to me is the channel name: in the working implementation the channel name is something like application.channel, while in the non-working app it is localhost:8080/channel.
I was wondering if there is some Spring Boot configuration blocking the channels or redirecting them to a different channel source?
Anyone had any similar issues?
application.yml
spring:
  datasource:
    url: jdbc:h2:mem:dpemail;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
    platform: h2
    username: hello
    password:
    driverClassName: org.h2.Driver
  jpa:
    properties:
      hibernate:
        show_sql: true
        use_sql_comments: true
        format_sql: true
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
      bindings:
        email-in:
          destination: email
          contentType: application/json
        email-out:
          destination: email
          contentType: application/json
Email
public class Email {

    private long timestamp;
    private String message;

    public long getTimestamp() {
        return timestamp;
    }

    public void setTimestamp(long timestamp) {
        this.timestamp = timestamp;
    }

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }
}
Binding Config
@EnableBinding(EmailQueues.class)
public class EmailQueueConfiguration {
}
Interface
public interface EmailQueues {

    String INPUT = "email-in";
    String OUTPUT = "email-out";

    @Input(INPUT)
    SubscribableChannel inboundEmails();

    @Output(OUTPUT)
    MessageChannel outboundEmails();
}
Controller
@RestController
@RequestMapping("/queue")
public class EmailQueueController {

    private EmailQueues emailQueues;

    @Autowired
    public EmailQueueController(EmailQueues emailQueues) {
        this.emailQueues = emailQueues;
    }

    @RequestMapping(value = "sendEmail", method = POST)
    @ResponseStatus(ACCEPTED)
    public void sendToQueue() {
        MessageChannel messageChannel = emailQueues.outboundEmails();
        Email email = new Email();
        email.setMessage("hello world: " + System.currentTimeMillis());
        email.setTimestamp(System.currentTimeMillis());
        messageChannel.send(MessageBuilder.withPayload(email).setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.APPLICATION_JSON).build());
    }

    @StreamListener(EmailQueues.INPUT)
    public void handleEmail(@Payload Email email) {
        System.out.println("received: " + email.getMessage());
    }
}
I'm not sure if one of the inherited configuration projects using Spring Cloud or Spring Cloud Sleuth might be preventing it from working, but even when I remove them it still doesn't work. Unlike in my application that does work with the above code, I never see the ConsumerConfig being configured, e.g.:
o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 100
auto.offset.reset = latest
bootstrap.servers = [localhost:9092]
check.crcs = true
client.id = consumer-2
connections.max.idle.ms = 540000
enable.auto.commit = false
exclude.internal.topics = true
(This configuration is what I see in my basic Spring Boot application when running the above code, and there the code works, writing to and reading from the Kafka channel.)
I assume there is some other Spring Boot configuration from one of the libraries I'm using that creates a different type of channel; I just cannot find what that configuration is.
What you posted contains a lot of unrelated configuration, so it's hard to determine if anything gets in the way. Also, when you say "..it appears that the messages never get sent or received..", are there any exceptions in the logs? Also, please state the version of Kafka you're using as well as of Spring Cloud Stream.
Now, I did try to reproduce it based on your code (after cleaning up a bit to only leave relevant parts) and was able to successfully send/receive.
My Kafka version is 0.11 and Spring Cloud Stream 2.0.0.
Here is the relevant code:
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
      bindings:
        email-in:
          destination: email
        email-out:
          destination: email
@SpringBootApplication
@EnableBinding(KafkaQuestionSoApplication.EmailQueues.class)
public class KafkaQuestionSoApplication {

    public static void main(String[] args) {
        SpringApplication.run(KafkaQuestionSoApplication.class, args);
    }

    @Bean
    public ApplicationRunner runner(EmailQueues emailQueues) {
        return new ApplicationRunner() {
            @Override
            public void run(ApplicationArguments args) throws Exception {
                emailQueues.outboundEmails().send(new GenericMessage<String>("Hello"));
            }
        };
    }

    @StreamListener(EmailQueues.INPUT)
    public void handleEmail(String payload) {
        System.out.println("received: " + payload);
    }

    public interface EmailQueues {

        String INPUT = "email-in";
        String OUTPUT = "email-out";

        @Input(INPUT)
        SubscribableChannel inboundEmails();

        @Output(OUTPUT)
        MessageChannel outboundEmails();
    }
}
Okay, so after a lot of debugging I discovered that something is creating a Test Support Binder (how, I don't know yet), which is obviously meant to keep messages from being added to a real channel.
After adding
@SpringBootApplication(exclude = TestSupportBinderAutoConfiguration.class)
the Kafka channel configuration works and messages are being added. It would be interesting to know what on earth is setting up this test support binder; I'll find that sucker eventually.