RabbitMQ error with Spring Boot 2.1.3

I am getting the below error while consuming messages from RabbitMQ through a fanout exchange:
2022-08-24 12:53:29.559 ERROR 42 --- [2.20.43.29:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - inequivalent arg 'type' for exchange 'avitas.nodedata' in vhost 'avitas': received 'topic' but current is 'fanout', class-id=40, method-id=10)
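The reply-code 406 means the broker rejected the declaration: the application is declaring 'avitas.nodedata' as a topic exchange, but an exchange with that name already exists on the broker as a fanout exchange. A minimal sketch of declaring it with the matching type in Spring AMQP (the queue and bean names here are assumptions for illustration):

```java
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.FanoutExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConfig {

    // Declare the exchange with the type that already exists on the broker
    // (fanout), so the declaration is equivalent and the channel is not
    // closed with reply-code 406.
    @Bean
    public FanoutExchange nodeDataExchange() {
        return new FanoutExchange("avitas.nodedata", /* durable */ true, /* autoDelete */ false);
    }

    // Hypothetical queue; fanout exchanges ignore routing keys, so the
    // binding needs no key.
    @Bean
    public Queue nodeDataQueue() {
        return new Queue("nodedata.queue", true);
    }

    @Bean
    public Binding nodeDataBinding() {
        return BindingBuilder.bind(nodeDataQueue()).to(nodeDataExchange());
    }
}
```

Alternatively, if the application is the source of truth, the existing exchange can be deleted on the broker and re-declared with the new type.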


DebeziumEngine looking for a Kafka topic even though I have not specified KafkaOffsetBackingStore for offset.storage
Reference : DebeziumEngine Config
Config
Configuration config = Configuration.create()
    .with("name", "oracle_debezium_connector")
    .with("connector.class", "io.debezium.connector.oracle.OracleConnector")
    .with("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore")
    .with("offset.storage.file.filename", "/Users/dk/Documents/work/ACET/offset.dat")
    .with("offset.flush.interval.ms", 2000)
    .with("database.hostname", "localhost")
    .with("database.port", "1521")
    .with("database.user", "pravin")
    .with("database.password", "*****")
    .with("database.sid", "ORCLCDB")
    .with("database.server.name", "mServer")
    .with("database.out.server.name", "dbzxout")
    .with("database.history", "io.debezium.relational.history.FileDatabaseHistory")
    .with("database.history.file.filename", "/Users/dk/Documents/work/ACET/dbhistory.dat")
    .with("topic.prefix", "cycowner")
    .with("database.dbname", "ORCLCDB")
    .build();
DebeziumEngine
DebeziumEngine<ChangeEvent<String, String>> engine = DebeziumEngine.create(Json.class)
    .using(config.asProperties())
    .using(connectorCallback)
    .using(completionCallback)
    .notifying(record -> {
        System.out.println(record);
    })
    .build();
Error :
2022-10-29T16:06:16,457 ERROR [pool-2-thread-1] i.d.c.Configuration: The 'schema.history.internal.kafka.topic' value is invalid: A value is required
2022-10-29T16:06:16,457 ERROR [pool-2-thread-1] i.d.c.Configuration: The 'schema.history.internal.kafka.bootstrap.servers' value is invalid: A value is required
2022-10-29T16:06:16,458 INFO [pool-2-thread-1] i.d.c.c.BaseSourceTask: Stopping down connector
2022-10-29T16:06:16,463 INFO [pool-3-thread-1] i.d.j.JdbcConnection: Connection gracefully closed
2022-10-29T16:06:16,465 INFO [pool-2-thread-1] o.a.k.c.s.FileOffsetBackingStore: Stopped FileOffsetBackingStore
connector stopped successfully
---------------------------------------------------
success status: false, message : Unable to initialize and start connector's task class 'io.debezium.connector.oracle.OracleConnectorTask' with config: {connector.class=io.debezium.connector.oracle.OracleConnector, database.history.file.filename=/Users/dkuma416/Documents/work/ACET/dbhistory.dat, database.user=pravin, database.dbname=ORCLCDB, offset.storage=org.apache.kafka.connect.storage.FileOffsetBackingStore, database.server.name=mServer, offset.flush.timeout.ms=5000, errors.retry.delay.max.ms=10000, database.port=1521, database.sid=ORCLCDB, offset.flush.interval.ms=2000, topic.prefix=cycowner, offset.storage.file.filename=/Users/dkuma416/Documents/work/ACET/offset.dat, errors.max.retries=-1, database.hostname=localhost, database.password=********, name=oracle_debezium_connector, database.out.server.name=dbzxout, errors.retry.delay.initial.ms=300, value.converter=org.apache.kafka.connect.json.JsonConverter, key.converter=org.apache.kafka.connect.json.JsonConverter, database.history=io.debezium.relational.history.MemoryDatabaseHistory}, Error: Error configuring an instance of KafkaSchemaHistory; check the logs for details
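The two "value is invalid" errors show that the connector fell back to the default Kafka-based schema history (KafkaSchemaHistory), which requires a Kafka topic and bootstrap servers. This typically happens because in Debezium 2.x the old database.history* property names were replaced by schema.history.internal*, so the old names in the config are ignored. A sketch of a file-based schema history under the new names (class and property names as documented for Debezium 2.x; the file path mirrors the one in the question):

```java
// In Debezium 2.x, database.history* was renamed to schema.history.internal*;
// with the old names the connector falls back to the default KafkaSchemaHistory
// and demands Kafka settings.
Configuration config = Configuration.create()
    // ... other properties as in the question ...
    .with("schema.history.internal", "io.debezium.storage.file.history.FileSchemaHistory")
    .with("schema.history.internal.file.filename", "/Users/dk/Documents/work/ACET/dbhistory.dat")
    .build();
```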

QueuesNotAvailableException: Cannot prepare queue for listener. Either the queue doesn't exist or the broker will not allow us to use it

After updating RabbitMQ from version 3.7 to 3.8, we started getting the below exception on microservice startup:
{"timestamp":"2021-01-07T12:41:05.738+00:00","class":"org.springframework.amqp.rabbit.connection.CachingConnectionFactory","thread-id":"main","level":"INFO","type":"createBareConnection","logMessage":"Created new connection: rabbitConnectionFactory#5b5e7036:1/SimpleConnection#1e734eee [delegate=amqp://test#172.40.1.237:5672/, localPort= 40702]"}
{"timestamp":"2021-01-07T12:41:05.740+00:00","class":"com.rabbitmq.client.impl.ForgivingExceptionHandler","thread-id":"AMQP Connection 172.40.1.237:5672","level":"WARN","type":"log","logMessage":"An unexpected connection driver error occured (Exception message: Connection reset)"}
{"timestamp":"2021-01-07T12:41:06.747+00:00","class":"org.springframework.amqp.rabbit.connection.CachingConnectionFactory","thread-id":"AMQP Connection 172.40.1.237:5672","level":"ERROR","type":"log","logMessage":"Channel shutdown: connection error; protocol method: #method<connection.close>(reply-code=503, reply-text=COMMAND_INVALID - unknown exchange type 'x-delayed-message', class-id=40, method-id=10)"}
We are using Spring Boot:
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.3.5.RELEASE</version>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
We are using @RabbitListener to bind and create the queue:
@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = "state.Transaction", durable = "true", autoDelete = "false"),
        exchange = @Exchange(value = "state.exchange", durable = "true"),
        key = "state.Transaction"))
rabbitmq:
  host: "rabbitmq-1-xyx.internal.xzy.zzz"
  password: test
  username: test
and this test user has all the super permissions:
rabbitmqctl set_permissions -p / test ".*" ".*" ".*"
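The COMMAND_INVALID - unknown exchange type 'x-delayed-message' reply means the upgraded broker no longer recognizes the delayed-message exchange type. That type is provided by the rabbitmq_delayed_message_exchange community plugin, whose build must match the broker version, so one thing to verify after upgrading to 3.8 (assuming the 3.8-compatible plugin is installed) is that it is enabled:

```shell
# Enable the community plugin that provides the x-delayed-message
# exchange type; the plugin binary must match the broker version,
# so it usually needs reinstalling after an upgrade.
rabbitmq-plugins enable rabbitmq_delayed_message_exchange

# Confirm it shows up as enabled.
rabbitmq-plugins list | grep delayed
```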

socket.io transport error with websocket transport

I'm trying to set up a socket.io connection, but the connection keeps closing with a transport error message. Both client and server are running socket.io v2.0.3.
Client
var _socket = io({
    transports: ['websocket'],
    query: {
        token: userToken,
        roomName: getRoomName(),
    },
});
messages:
socket.io-client:manager attempting reconnect +5s
socket.io.min.js:1 socket.io-client:manager readyState closed +0ms
socket.io.min.js:1 socket.io-client:manager opening https://video.twoseven.xyz +1ms
socket.io.min.js:1 engine.io-client:socket creating transport "websocket" +1ms
socket.io.min.js:1 engine.io-client:socket setting transport websocket +1ms
socket.io.min.js:1 socket.io-client:manager connect attempt will timeout after 20000 +2ms
socket.io.min.js:2 WebSocket connection to 'wss://video.twoseven.xyz/socket.io/?token=abcd&roomName=us&EIO=3&transport=websocket' failed: Invalid frame header
r.doOpen # socket.io.min.js:2
r.open # socket.io.min.js:2
r.open # socket.io.min.js:1
r # socket.io.min.js:1
r # socket.io.min.js:1
r.open.r.connect # socket.io.min.js:1
(anonymous) # socket.io.min.js:1
socket.io.min.js:1 engine.io-client:socket socket error {"type":"TransportError","description":{"isTrusted":true}} +733ms
socket.io.min.js:1 socket.io-client:manager connect_error +1ms
socket.io.min.js:1 socket.io-client:manager cleanup +0ms
socket.io.min.js:1 socket.io-client:manager reconnect attempt error +1ms
socket.io.min.js:1 socket.io-client:manager will wait 4769ms before reconnect attempt +1ms
socket.io.min.js:1 engine.io-client:socket socket close with reason: "transport error" +0ms
The Chrome developer console reports the following from the network tab:
General
Request URL:wss://video.twoseven.xyz/socket.io/?token=abcd&roomName=us&EIO=3&transport=websocket
Request Method:GET
Status Code:101 Switching Protocols
Response Headers
Connection:upgrade
Date:Thu, 19 Oct 2017 22:36:52 GMT
Sec-WebSocket-Accept:YJ3aZ2L+X+ANa1bJK3ECO/s7XVE=
Sec-WebSocket-Extensions:permessage-deflate
Server:nginx/1.11.8
Upgrade:websocket
Server
const io = new socketio(server, {pingInterval: 3000, pingTimeout: 10000});
io.set('transports', ['websocket']);
messages:
engine handshaking client "yrFJADAHt-QQZsX6AAAA" +0ms
engine:socket sending packet "open" ({"sid":"yrFJADAHt-QQZsX6AAAA","upgrades":[],"pingInterval":3000,"pingTimeout":10000}) +3ms
engine:socket flushing buffer to transport +1ms
engine:ws writing "0{"sid":"yrFJADAHt-QQZsX6AAAA","upgrades":[],"pingInterval":3000,"pingTimeout":10000}" +1ms
engine:transport setting request +1ms
engine:socket sending packet "message" (0) +9ms
engine:socket flushing buffer to transport +19ms
engine:ws writing "40" +1ms
engine:socket sending packet "message" (2["authenticated"]) +91ms
engine:socket flushing buffer to transport +0ms
engine:ws writing "42["authenticated"]" +1ms
engine:socket transport error +136ms
engine:ws closing +2ms
From the server logs, I can see that the connection event is triggered and executes fine except for the last line, which is socket.emit('authenticated');. The transport seems to fail at this point.
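An "Invalid frame header" on the client combined with a server-side transport error is often caused by an intermediate proxy mangling the WebSocket frames, and the response headers above show nginx/1.11.8 in front of the server. A minimal sketch of an nginx location block that passes socket.io WebSocket traffic through untouched (the upstream address and port are assumptions):

```nginx
location /socket.io/ {
    proxy_pass http://127.0.0.1:3000;        # assumed socket.io upstream
    proxy_http_version 1.1;                  # required for the Upgrade handshake
    proxy_set_header Upgrade $http_upgrade;  # forward the WebSocket upgrade
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_buffering off;                     # do not buffer frames
}
```

If the handshake succeeds (101 Switching Protocols, as in the trace above) but frames then break, response-altering modules such as compression on the proxied location are also worth ruling out.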

Does Spring Cloud Stream Kafka support embedded headers?

According to this topic:
Kafka Spring Integration: Headers not coming for kafka consumer -
there is no header support for Kafka.
But documentation says:
spring.cloud.stream.kafka.binder.headers
The list of custom headers that will be transported by the binder.
Default: empty.
I can't get it working with spring-cloud-stream-binder-kafka: 1.2.0.RELEASE
SENDING LOG:
MESSAGE (e23885fd-ffd9-42dc-ebe3-5a78467fee1f) SENT :
GenericMessage [payload=...,
headers={
content-type=application/json,
correlationId=51dd90b1-76e6-4b8d-b667-da25f214f383,
id=e23885fd-ffd9-42dc-ebe3-5a78467fee1f,
contentType=application/json,
timestamp=1497535771673
}]
RECEIVING LOG:
MESSAGE (448175f5-2b21-9a44-26b9-85f093b33f6b) RECEIVED BY HANDLER 1:
GenericMessage [payload=...,
headers={
kafka_offset=36,
id=448175f5-2b21-9a44-26b9-85f093b33f6b,
kafka_receivedPartitionId=0,
contentType=application/json;charset=UTF-8,
kafka_receivedTopic=new_patient, timestamp=1497535771715
}]
MESSAGE (448175f5-2b21-9a44-26b9-85f093b33f6b) RECEIVED BY HANDLER 2 :
GenericMessage [payload=...,
headers={
kafka_offset=36,
id=448175f5-2b21-9a44-26b9-85f093b33f6b,
kafka_receivedPartitionId=0,
contentType=application/json;charset=UTF-8,
kafka_receivedTopic=new_patient, timestamp=1497535771715
}]
I expect to see the same message id and to get the correlationId on the receiving side.
application.properties:
spring.cloud.stream.kafka.binder.headers=correlationId
spring.cloud.stream.bindings.newTest.destination=new_test
spring.cloud.stream.bindings.newTestCreated.destination=new_test
spring.cloud.stream.default.consumer.headerMode=embeddedHeaders
spring.cloud.stream.default.producer.headerMode=embeddedHeaders
SENDING MESSAGE:
@Publisher(channel = "testChannel")
public Object newTest(Object param) {
    ...
    return myObject;
}
Yes, it does: http://docs.spring.io/spring-cloud-stream/docs/Chelsea.SR2/reference/htmlsingle/index.html#_consumer_properties
headerMode
When set to raw, disables header parsing on input. Effective only for messaging middleware that does not support message headers natively and requires header embedding. Useful when inbound data is coming from outside Spring Cloud Stream applications.
Default: embeddedHeaders
But that is already a Spring Cloud Stream story, not Spring Kafka per se.
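For completeness, headerMode can also be set per binding instead of through the defaults shown in the question; a sketch using the binding names from the question (whether each binding is a producer or a consumer here is an assumption):

```properties
# Headers to embed into the Kafka payload (binder-level).
spring.cloud.stream.kafka.binder.headers=correlationId

# Per-binding headerMode instead of spring.cloud.stream.default.*
spring.cloud.stream.bindings.newTest.producer.headerMode=embeddedHeaders
spring.cloud.stream.bindings.newTestCreated.consumer.headerMode=embeddedHeaders
```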

Spring Integration: File Copy functionality with delay

I am using the following configuration to copy files from one directory to another:
@Bean
public MessageChannel fileInputChannel() {
    return new DirectChannel();
}

@Bean
@InboundChannelAdapter(value = "fileInputChannel", poller = @Poller(fixedDelay = "10000"))
public MessageSource<File> fileReadingMessageSource() {
    FileReadingMessageSource source = new FileReadingMessageSource();
    source.setDirectory(new File("C:/input_dir"));
    source.setFilter(new RegexPatternFileListFilter(".*"));
    return source;
}

@Bean
@ServiceActivator(inputChannel = "fileInputChannel")
public FileWritingMessageHandler handle() {
    FileWritingMessageHandler handler = new FileWritingMessageHandler(new File("C:/Output_dir"));
    handler.setDeleteSourceFiles(false);
    handler.setExpectReply(false);
    handler.setPreserveTimestamp(true);
    handler.setAsync(true);
    return handler;
}
I am expecting that:
- if there is any change in any source file, OR
- if a new file is created in the source directory,
the updated or newly created file will be updated/created in the destination folder within 10 seconds. However, it is taking more than 1 minute, even though the files are only a few KB in size and the source and destination directories are on the same machine.
I am not able to identify why it is taking more than 1 minute when I have set the @Poller delay to 10 seconds.
logs -
[2017-01-31 10:33:04,943612]INFO [task-scheduler-3] (FileReadingMessageSource.java:367) - Created message: [GenericMessage [payload=C:/input_dir/file.170126.19, headers={timestamp=1485876784308, id=662aaf51-91e5-6f78-a2a6-997fc01b8b79}]]
[2017-01-31 10:33:04,943612]DEBUG[task-scheduler-3] (AbstractPollingEndpoint.java:267) - Poll resulted in Message: GenericMessage [payload=C:/input_dir/file.170126.19, headers={timestamp=1485876784308, id=662aaf51-91e5-6f78-a2a6-997fc01b8b79}]
[2017-01-31 10:33:04,943612]DEBUG[task-scheduler-3] (AbstractMessageChannel.java:411) - preSend on channel 'fileInputChannel', message: GenericMessage [payload=C:/input_dir/file.170126.19, headers={timestamp=1485876784308, id=662aaf51-91e5-6f78-a2a6-997fc01b8b79}]
[2017-01-31 10:33:04,943612]DEBUG[task-scheduler-3] (AbstractMessageHandler.java:115) - handle received message: GenericMessage [payload=C:/input_dir/file.170126.19, headers={timestamp=1485876784308, id=662aaf51-91e5-6f78-a2a6-997fc01b8b79}]
[2017-01-31 10:33:04,943618]DEBUG[task-scheduler-3] (AbstractMessageChannel.java:430) - postSend (sent=true) on channel 'fileInputChannel', message: GenericMessage [payload=C:/input_dir/file.170126.19, headers={timestamp=1485876784308, id=662aaf51-91e5-6f78-a2a6-997fc01b8b79}]
[2017-01-31 10:33:14,953620]INFO [task-scheduler-4] (FileReadingMessageSource.java:367) - Created message: [GenericMessage [payload=C:/input_dir/file.170127.19, headers={timestamp=1485876794316, id=c05deaec-f863-fd7f-0b08-dd3534be81d7}]]
[2017-01-31 10:33:14,953620]DEBUG[task-scheduler-4] (AbstractPollingEndpoint.java:267) - Poll resulted in Message: GenericMessage [payload=C:/input_dir/file.170127.19, headers={timestamp=1485876794316, id=c05deaec-f863-fd7f-0b08-dd3534be81d7}]
[2017-01-31 10:33:14,953620]DEBUG[task-scheduler-4] (AbstractMessageChannel.java:411) - preSend on channel 'fileInputChannel', message: GenericMessage [payload=C:/input_dir/file.170127.19, headers={timestamp=1485876794316, id=c05deaec-f863-fd7f-0b08-dd3534be81d7}]
[2017-01-31 10:33:14,953620]DEBUG[task-scheduler-4] (AbstractMessageHandler.java:115) - handle received message: GenericMessage [payload=C:/input_dir/file.170127.19, headers={timestamp=1485876794316, id=c05deaec-f863-fd7f-0b08-dd3534be81d7}]
[2017-01-31 10:33:14,953626]DEBUG[task-scheduler-4] (AbstractMessageChannel.java:430) - postSend (sent=true) on channel 'fileInputChannel', message: GenericMessage [payload=C:/input_dir/file.170127.19, headers={timestamp=1485876794316, id=c05deaec-f863-fd7f-0b08-dd3534be81d7}]
OK. I got your concern!
Look, #Poller has this property:
/**
 * @return The maximum number of messages to receive for each poll.
 * Can be specified as 'property placeholder', e.g. {@code ${poller.maxMessagesPerPoll}}.
 * Defaults to -1 (infinity) for polling consumers and 1 for polling inbound channel adapters.
 */
String maxMessagesPerPoll() default "";
Pay attention to how it defaults to 1 for polling inbound channel adapters: each 10-second poll hands over only one file.
So, to pick up all your files in one polling task, you should set this property to infinity:
@Poller(maxMessagesPerPoll = "-1")
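Applied to the adapter from the question, the poller would then look like this (a sketch; the channel name, paths, and filter are the ones from the question):

```java
@Bean
@InboundChannelAdapter(value = "fileInputChannel",
        poller = @Poller(fixedDelay = "10000", maxMessagesPerPoll = "-1"))
public MessageSource<File> fileReadingMessageSource() {
    // With maxMessagesPerPoll = -1 the adapter emits every file found
    // during a poll, instead of the default of one file per 10-second poll.
    FileReadingMessageSource source = new FileReadingMessageSource();
    source.setDirectory(new File("C:/input_dir"));
    source.setFilter(new RegexPatternFileListFilter(".*"));
    return source;
}
```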
