I am trying to implement a Kafka consumer with SSL and have provided all the required configuration in application.yml.
When I start the Spring Boot Kafka consumer application, the consumer tries to connect to localhost:9092 instead of the configured Kafka brokers.
KafkaConfig.java
@Bean
public ConsumerFactory<String, AvroRecord> consumerFactory() throws IOException {
    return new DefaultKafkaConsumerFactory<>(kafkaProps());
}

@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, AvroRecord>>
        kafkaListenerContainerFactory() throws IOException {
    ConcurrentKafkaListenerContainerFactory<String, AvroRecord> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    return factory;
}
kafkaProps() loads all the SSL and bootstrap-servers related properties; I can see the values in debug mode.
application.yml
kafka:
  properties:
    basic:
      auth:
        credentials:
          source: USER_INFO
          user: username
          pass: password
    enableAutoRegister: true
    max_count: 100
    max_delay: 5000
    schema:
      registry:
        url: https://schema-registry:8081
        ssl:
          truststore:
            location: <<location>>
            password: pwd
          keystore:
            location: <<location>>
            password: pwd
          key:
            password: pwd
    ssl:
      enabled: true
      protocols: TLSv1.2,TLSv1.1,TLSv1
      truststore:
        type: JKS
        location: <<location>>
        password: pwd
      keystore:
        type: JKS
        location: <<location>>
        password: pwd
      key:
        password: pwd
    security:
      protocol: SSL
  consumer:
    bootstrap-servers: broker1:9092,broker2:9092
    auto-offset-reset: earliest
    key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    value-deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
    max-message-size: 10241024
In the application logs, I am getting the log below:
18:46:33.964 [main] INFO o.a.k.c.a.AdminClientConfig requestId=
transactionKey= | AdminClientConfig values:
bootstrap.servers = [localhost:9092]
client.dns.lookup = use_all_dns_ips
client.id =
connections.max.idle.ms = 300000
15:53:54.608 [kafka-admin-client-thread | adminclient-1] WARN o.a.k.c.NetworkClient requestId=
transactionKey= | [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
I am not able to figure out why it is connecting to localhost instead of the configured brokers.
The correct property is spring.kafka.bootstrap-servers; you appear to be missing the spring prefix completely. Also, schema.registry.url, ssl.truststore, etc. are all treated as singular, dotted property keys (strings) by the Kafka clients, so (to my knowledge) they should not be "nested" as YAML objects.
You also only tried to set the bootstrap property on the consumer, not the AdminClient.
Your client will always connect to the advertised.listeners of the broker after making the initial connection to the bootstrap server string, so if that is localhost:9092, it would explain the AdminClient log output.
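As a minimal sketch, the configuration could be restructured along these lines (assuming the standard Spring Boot spring.kafka.* properties; the <<location>> placeholders and credentials are kept from the question, and the bracketed keys under spring.kafka.properties are Spring Boot's escape syntax for dotted map keys):

spring:
  kafka:
    # applies to the consumer, producer, and the auto-configured AdminClient
    bootstrap-servers: broker1:9092,broker2:9092
    security:
      protocol: SSL
    ssl:
      trust-store-type: JKS
      trust-store-location: file:<<location>>
      trust-store-password: pwd
      key-store-type: JKS
      key-store-location: file:<<location>>
      key-store-password: pwd
      key-password: pwd
    properties:
      "[schema.registry.url]": https://schema-registry:8081
      "[basic.auth.credentials.source]": USER_INFO
    consumer:
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer

With the spring prefix in place, the custom kafkaProps() loading may become unnecessary, since Spring Boot builds the consumer factory from these properties itself.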
Related
I am trying to start Spring-Kafka with Spring Boot 2.1.7.RELEASE on localhost with Java 12.
I am getting the error:
"org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-1, groupId=inter] Connection to node -1 could not be established. Broker may not be available."
I tried switching the Java version to 11 and 8 and various properties:
spring:
  kafka:
    consumer:
      #bootstrap-servers: localhost:9092
      group-id: inter
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      bootstrap-servers: localhost:9092
@Service
public class KafkaHalloWorldMessagingService {

    private KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    public KafkaHalloWorldMessagingService(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendHalloToTheSystem(String messageToSend) {
        kafkaTemplate.send("interlinked.hallo.topic", messageToSend);
    }
}

@Component
public class KafkaHalloWorldListener {

    @KafkaListener(topics = "interlinked.hallo.topics", groupId = "inter")
    public void handle(String messageToListenTo) {
        System.out.println(messageToListenTo.toUpperCase());
    }
}
2019-08-22 16:25:20.580 WARN 5865 --- [ restartedMain] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-1, groupId=inter] Connection to node -1 could not be established. Broker may not be available.
Make sure the bootstrap-servers value in the yml file and the listeners value in Kafka's server.properties file are the same.
Update these two values in the server.properties file, which can be found in the config folder of the Kafka download directory:

zookeeper.connect=<your IPv4 address>:2181
listeners=PLAINTEXT://<your IPv4 address>:9092

e.g. zookeeper.connect=10.147.2.161:2181

And why is the consumer's bootstrap-servers property commented out? Please use the producer's bootstrap-servers value for the consumer too:

spring.kafka.bootstrap-servers=<your IPv4 address>:9092
Or split:

spring:
  kafka:
    producer:
      bootstrap-servers: <your IPv4 address>:9092
    consumer:
      bootstrap-servers: <your IPv4 address>:9092
This assumes your ZooKeeper and Kafka are up.
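A quick way to verify the broker is reachable (a sketch using the standard Kafka CLI tools from the download directory; note that --bootstrap-server for kafka-topics requires Kafka 2.2+, older versions use --zookeeper instead):

bin/kafka-topics.sh --list --bootstrap-server <your IPv4 address>:9092

If this hangs or errors out, the problem is with the broker or its listener configuration, not with the Spring application.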
I was trying to connect to a Kafka AWS instance through a local Spring Boot API.
I am able to connect, but while listening to the topic it throws the exception below, even though new topics were created successfully by the Spring Boot API.
I am unable to publish any messages as well.
java.io.IOException: Can't resolve address: ip-xxx-xx-xx-xx.ec2.internal:9092
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:235) ~[kafka-clients-2.0.1.jar:na]
Caused by: java.nio.channels.UnresolvedAddressException: null
at sun.nio.ch.Net.checkAddress(Net.java:101) ~[na:1.8.0_192]
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622) ~[na:1.8.0_192]
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:233) ~[kafka-clients-2.0.1.jar:na]
... 30 common frames omitted
2019-07-17 15:36:13.581 WARN 3709 --- [ main] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-1, groupId=group_id] Error connecting to node ip-172-31-80-50.ec2.internal:9092 (id: 0 rack: null)
I allowed these ports in the security group as well:

Type            | Protocol | Port | Source
Custom TCP Rule | TCP      | 2181 | 0.0.0.0/0
Custom TCP Rule | TCP      | 9092 | 0.0.0.0/0
server:
  port: 8081
spring:
  kafka:
    consumer:
      bootstrap-servers: xx.xx.xx.xx:9092
      group-id: group_id
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      bootstrap-servers: xx.xx.xx.xx:9092
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
@KafkaListener(topics = "ConsumerTest", groupId = "group_id")
public void consume(String message) throws IOException {
    logger.info(String.format("#### -> Consumed message -> %s", message));
}
java.io.IOException: Can't resolve address: ip-xxx-xx-xx-xx.ec2.internal:9092
Error connecting to node ip-172-31-80-50.ec2.internal:9092
When consumers connect to the broker, they get back the metadata of the broker for the partition from which they're reading data. What your client is getting back here is the advertised.listeners value of the Kafka broker. So whilst you connect to the broker on its public address, it returns to your client the internal address of the machine.
To fix this, you need to set up your listeners correctly on your brokers. See https://rmoff.net/2018/08/02/kafka-listeners-explained/ for details.
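As a rough sketch of what that can look like in the broker's server.properties (assuming a single broker that should be reachable both inside EC2 and from outside; the hostnames and internal port are illustrative):

# server.properties
listeners=INTERNAL://0.0.0.0:19092,EXTERNAL://0.0.0.0:9092
advertised.listeners=INTERNAL://ip-172-31-80-50.ec2.internal:19092,EXTERNAL://<public-dns-or-ip>:9092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL

With this, external clients bootstrapping against port 9092 are handed back the public address instead of the unresolvable internal EC2 hostname.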
Virtual hosts are not getting created on the RabbitMQ server based on the configuration.
Do I have to make sure the virtual hosts (VHs) already exist on RabbitMQ?
Am I missing some configuration?
Please find the configuration below.
application.yml
spring:
  rabbitmq:
    host: 127.0.0.1
    virtual-host: /defaultVH
    username: defaultUser
    password: defaultPassword
  cloud:
    stream:
      bindings:
        saviyntSampleQueueA:
          binder: rabbit-A
          contentType: application/x-java-object
          group: groupA
          destination: saviyntSampleQueueA
        saviyntSampleQueueB:
          binder: rabbit-B
          contentType: application/x-java-object
          group: groupB
          destination: saviyntSampleQueueB
      binders:
        rabbit-A:
          defaultCandidate: false
          inheritEnvironment: false
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: 127.0.0.1
                virtualHost: /vhA
                username: userA
                password: paswdA
                port: 5672
                connection-timeout: 10000
        rabbit-B:
          defaultCandidate: false
          inheritEnvironment: false
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: 127.0.0.1
                virtualHost: /vhB
                username: userB
                password: paswdB
                port: 5672
                connection-timeout: 10000
bootstrap.yml
############################################
# default settings
############################################
spring:
  main:
    banner-mode: "off"
  application:
    name: demo-service
  cloud:
    config:
      enabled: true # change this to use config-service
      retry:
        maxAttempts: 3
      discovery:
        enabled: false
      fail-fast: true
      override-system-properties: false
server:
  port: 8080
Added the default Spring Boot application class with @EnableBinding:
@EnableBinding({MessageChannels.class})
@SpringBootApplication
public class Configissue1124Application {

    public static void main(String[] args) {
        SpringApplication.run(Configissue1124Application.class, args);
    }
}
Now a simple, straightforward message channel interface to dispatch messages:
interface MessageChannels {

    @Input("saviyntSampleQueueA")
    SubscribableChannel queueA();

    @Input("saviyntSampleQueueB")
    SubscribableChannel queueB();
}
When I run the Boot application, it is not creating any virtual host on the system. I tried using the config server by providing the same configuration, but still no luck.
Can you please check if there is something I am missing?
Thanks in advance.
The AMQP protocol (or RabbitMQ REST API) provides no mechanism to provision virtual hosts from the client.
Virtual hosts must be provisioned manually on the server.
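For example (a sketch using the standard rabbitmqctl tool on the server; vhost, user, and password taken from the configuration above):

rabbitmqctl add_vhost /vhA
rabbitmqctl add_user userA paswdA
rabbitmqctl set_permissions -p /vhA userA ".*" ".*" ".*"

The same can be done through the management UI; once the virtual hosts exist, the binders can connect to them.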
I am trying to integrate my Spring Boot applications with Keycloak, starting with a secured Swagger page.
keytool helped me generate a self-signed keystore:

keytool -genkey -alias abcdef -storetype PKCS12 -keyalg RSA -keysize 2048 -keystore keystore.p12 -validity 3650

I use the above to set up SSL for the app:
server:
  port: "15700"
  ssl:
    enabled: true
    key-store: classpath:keystore.p12
    key-store-password: password
    key-alias: abcdef
    key-store-type: PKCS12
Without Keycloak, HTTPS for Swagger works as expected.
I started Keycloak from their Docker image as below, exposing both HTTP and HTTPS:
services:
  keycloak:
    image: jboss/keycloak
    environment:
      DB_VENDOR: POSTGRES
      DB_ADDR: my.ip.address
      DB_PORT: 5432
      DB_DATABASE: keycloak
      DB_USER: username
      DB_PASSWORD: password
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: password
    ports:
      - 8443:8443
      - 8080:8080
I ask users to log in first when they want to access the Swagger docs, so I configured Keycloak as below:
keycloak:
  auth-server-url: "https://192.168.1.15:8443/auth"
  realm: "DemoRealm"
  public-client: true
  resource: demo-app
  security-constraints[0]:
    authRoles[0]: "user"
    securityCollections[0]:
      name: "Demo App"
      patterns[0]: "/swagger-ui.html"
Now a user who is not logged in will be directed to the Keycloak login page, which works perfectly. But after a successful login, when redirecting back to the app's Swagger page, I got the following error:

Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
If I configure the Keycloak auth URL to HTTP:
keycloak:
  auth-server-url: "http://192.168.1.15:8080/auth"
  realm: "DemoRealm"
  public-client: true
  resource: demo-app
  security-constraints[0]:
    authRoles[0]: "user"
    securityCollections[0]:
      name: "Demo App"
      patterns[0]: "/swagger-ui.html"
everything works perfectly.
Is this a configuration issue for Keycloak or for the Spring Boot app? Are there any required steps I missed?
You can try to set up your RestTemplate bean.
Add the dependency:

implementation 'org.apache.httpcomponents:httpclient:4.5'

Provide a RestTemplate bean:
@Bean
public RestTemplate restTemplate() { // note: @Bean methods must not be private
    SSLContext sslContext = buildSslContext();
    SSLConnectionSocketFactory socketFactory = new SSLConnectionSocketFactory(sslContext);
    HttpClient httpClient = HttpClients.custom()
            .setSSLSocketFactory(socketFactory)
            .build();
    HttpComponentsClientHttpRequestFactory factory = new HttpComponentsClientHttpRequestFactory(httpClient);
    return new RestTemplate(factory);
}

private SSLContext buildSslContext() {
    try {
        // sslProperties is assumed to be an injected @ConfigurationProperties
        // holder for the server.ssl.* values shown below
        char[] keyStorePassword = sslProperties.getKeyStorePassword();
        return new SSLContextBuilder()
                .loadKeyMaterial(
                        // KeyStore.getInstance(File, char[]) requires Java 9+
                        KeyStore.getInstance(new File(sslProperties.getKeyStore()), keyStorePassword),
                        keyStorePassword
                ).build();
    } catch (Exception ex) {
        throw new IllegalStateException("Unable to instantiate SSL context", ex);
    } finally {
        // wipe the passwords once the context is built
        sslProperties.setKeyStorePassword(null);
        sslProperties.setTrustStorePassword(null);
    }
}
Provide the required SSL properties in your application.properties or application.yml file:
server:
  ssl:
    enabled: true
    key-store: /path/to/key.keystore
    key-store-password: password
    key-alias: alias
    trust-store: /path/to/truststore
    trust-store-password: password
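If you don't already have a truststore containing the self-signed certificate, one way to create it (a sketch with the same keytool used above; alias and file names are illustrative) is:

keytool -exportcert -alias abcdef -keystore keystore.p12 -storetype PKCS12 -file keycloak.crt
keytool -importcert -alias abcdef -file keycloak.crt -keystore truststore.jks

The resulting truststore.jks is what the trust-store property above would point at.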
Alternatively, you can use my Spring Boot starter.
I'm trying to use Spring Cloud Stream to integrate with Kafka. The message being written is a Java POJO, and while it works as expected (the message is written to the topic and I can read it off with a consumer app), there are some unknown characters being added to the start of the message which cause trouble when trying to integrate Kafka Connect to sink the messages from the topic.
With the default setup this is the message being pushed to Kafka:
contentType "text/plain"originalContentType "application/json;charset=UTF-8"{"payload":{"username":"john"},"metadata":{"eventName":"Login","sessionId":"089acf50-00bd-47c9-8e49-dc800c1daf50","username":"john","hasSent":null,"createDate":1511186145471,"version":null}}
If I configure the Kafka producer within the Java app, the message is written to the topic without the leading characters / headers:
@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, Object> producerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(
                ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "localhost:9092");
        configProps.put(
                ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class);
        configProps.put(
                ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                JsonSerializer.class);
        return new DefaultKafkaProducerFactory<String, Object>(configProps);
    }

    @Bean
    public KafkaTemplate<String, Object> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
Message on Kafka:
{"payload":{"username":"john"},"metadata":{"eventName":"Login","sessionId":"089acf50-00bd-47c9-8e49-dc800c1daf50","username":"john","hasSent":null,"createDate":1511186145471}
Since I'm just setting the key/value serializers, I would've expected to be able to do this within the application.yml properties file rather than through code.
However, when the yml is updated to specify the serializers, it's not working as I would expect, i.e. it does not generate the same message as the producer configured in Java (above):
spring:
  profiles: local
  cloud:
    stream:
      bindings:
        session:
          destination: session
          contentType: application/json
      kafka:
        binder:
          brokers: localhost
          zkNodes: localhost
          defaultZkPort: 2181
          defaultBrokerPort: 9092
        bindings:
          session:
            producer:
              configuration:
                value:
                  serializer: org.springframework.kafka.support.serializer.JsonSerializer
                key:
                  serializer: org.apache.kafka.common.serialization.StringSerializer
Message on Kafka:
"/wILY29udGVudFR5cGUAAAAMInRleHQvcGxhaW4iE29yaWdpbmFsQ29udGVudFR5cGUAAAAgImFwcGxpY2F0aW9uL2pzb247Y2hhcnNldD1VVEYtOCJ7InBheWxvYWQiOnsidXNlcm5hbWUiOiJqb2huIn0sIm1ldGFkYXRhIjp7ImV2ZW50TmFtZSI6IkxvZ2luIiwic2Vzc2lvbklkIjoiNGI3YTBiZGEtOWQwZS00Nzg5LTg3NTQtMTQyNDUwYjczMThlIiwidXNlcm5hbWUiOiJqb2huIiwiaGFzU2VudCI6bnVsbCwiY3JlYXRlRGF0ZSI6MTUxMTE4NjI2NDk4OSwidmVyc2lvbiI6bnVsbH19"
Should it be possible to configure this solely through the application.yml? Are there additional settings that are missing?
Credit to @Gary for the answer above!
For completeness, the configuration which is now working for me is below.
spring:
  profiles: local
  cloud:
    stream:
      bindings:
        session:
          producer:
            useNativeEncoding: true
          destination: session
          contentType: application/json
      kafka:
        binder:
          brokers: localhost
          zkNodes: localhost
          defaultZkPort: 2181
          defaultBrokerPort: 9092
        bindings:
          session:
            producer:
              configuration:
                value:
                  serializer: org.springframework.kafka.support.serializer.JsonSerializer
                key:
                  serializer: org.apache.kafka.common.serialization.StringSerializer
See headerMode and useNativeEncoding in the producer properties (....session.producer.useNativeEncoding).
headerMode
When set to raw, disables header embedding on output. Effective only for messaging middleware that does not support message headers natively and requires header embedding. Useful when producing data for non-Spring Cloud Stream applications.
Default: embeddedHeaders.
useNativeEncoding
When set to true, the outbound message is serialized directly by client library, which must be configured correspondingly (e.g. setting an appropriate Kafka producer value serializer). When this configuration is being used, the outbound message marshalling is not based on the contentType of the binding. When native encoding is used, it is the responsibility of the consumer to use appropriate decoder (ex: Kafka consumer value de-serializer) to deserialize the inbound message. Also, when native encoding/decoding is used the headerMode property is ignored and headers will not be embedded into the message.
Default: false.
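On the consuming side there is a matching flag; a minimal sketch (useNativeDecoding is the standard Spring Cloud Stream consumer property, and the binding name is assumed to mirror the producer's):

spring:
  cloud:
    stream:
      bindings:
        session:
          consumer:
            useNativeDecoding: true

together with the corresponding Kafka value deserializer under the binding's consumer configuration.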
Now, the spring.kafka.producer.value-serializer property can be used.
yml:
spring:
  kafka:
    producer:
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
properties:
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer