I am using aiokafka to produce messages asynchronously. I have an API using Django that produces messages to a Kafka queue, and it was working fine. Now that I have converted the same API to use an aiohttp server, the following error comes up:
aiokafka.errors.ProducerClosed: ProducerClosed
The first message is produced successfully; the error above is raised when the second message is produced.
loop = asyncio.get_event_loop()
producer = AIOKafkaProducer(
    loop=loop,
    bootstrap_servers="127.0.0.1:9092"
)
await producer.start()
response = await producer.send_and_wait(queue_name, msg)
await producer.stop()
There is no information about this error in the aiokafka docs. Please help.
Edit:
I am sharing this producer among handlers. If I leave the producer open, will it cause any issues? When will the producer be closed automatically?
aiokafka.errors.ProducerClosed: ProducerClosed
This error occurs when a message is sent to a closed producer.
If you share the producer among handlers, make sure that you don't close it after the first message is produced.
Edit: you can close it in a cleanup context:

async def kafka(app):
    await producer.start()
    yield
    await producer.stop()

app.cleanup_ctx.append(kafka)
Without it, the producer's connections will not be closed cleanly when the server shuts down.
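For completeness, here is a minimal sketch of the whole pattern: one shared producer, started once and stored on the app, reused by every handler, and stopped only on shutdown. The route, handler name, and app key are illustrative assumptions, not taken from the question; newer aiokafka versions pick up the running loop themselves, so loop= is omitted here.

from aiohttp import web
from aiokafka import AIOKafkaProducer

async def kafka(app):
    # start one producer for the whole application
    producer = AIOKafkaProducer(bootstrap_servers="127.0.0.1:9092")
    await producer.start()
    app["producer"] = producer  # illustrative app key
    yield
    await producer.stop()  # runs once, on server shutdown

async def handle(request):
    msg = await request.read()
    # reuse the shared producer; do not stop() it here
    await request.app["producer"].send_and_wait("queue_name", msg)
    return web.Response(text="ok")

app = web.Application()
app.cleanup_ctx.append(kafka)
app.router.add_post("/produce", handle)  # illustrative route
web.run_app(app)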
I know that I can get a message object using await ctx.fetch_message(mesId). However, if I send a message and then restart the bot's session (restart the client), the script cannot see the message. Is there any way to get around this problem?
Also, it's worth mentioning that I use a discord.Bot type of user, not discord.Client.
First of all, there is no ctx.fetch_message (see the reference).
It should be ctx.channel.fetch_message. Keep in mind you can get another channel by using bot.get_channel(ID) (it is not a coroutine) and then fetch the message.
Your code should look like this:
# using another channel
channel = bot.get_channel(123456)  # cache lookup, not awaitable
message = await channel.fetch_message(123456)

# or using ctx
message = await ctx.channel.fetch_message(123456)
Docs
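One caveat worth adding (an assumption about the restart symptom, not something stated in the answer): right after a restart, bot.get_channel reads only the client's cache and can return None until the cache is populated. discord.py also provides bot.fetch_channel, a coroutine that retrieves the channel from the API, so a cache-safe lookup can fall back to it:

# IDs are placeholders
channel = bot.get_channel(123456) or await bot.fetch_channel(123456)
message = await channel.fetch_message(123456)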
If I acknowledge the same message twice using the Delivery.Ack method, my consumer channel just closes by itself.
Is this expected behaviour? Has anyone experienced this?
The reason I am acknowledging the same message twice is a special case where I have to break the original message into copies and process them on the consumer. Once the consumer has processed everything, it loops and acks everything. Since there are copies of the entity, it acks the same message twice, and my consumer channel shuts down.
According to the AMQP reference, a channel exception is raised when a message gets acknowledged for the second time:
A message MUST not be acknowledged more than once. The receiving peer MUST validate that a non-zero delivery-tag refers to a delivered message, and raise a channel exception if this is not the case.
A second call to Ack(...) for the same message will not return an error, but the channel gets closed due to this exception received from the server:
Exception (406) Reason: "PRECONDITION_FAILED - unknown delivery tag ?"
It is possible to register a listener via Channel.NotifyClose to observe this exception.
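As an illustration of avoiding the double ack on the consumer side, here is a minimal sketch in Python with pika (the question appears to use a different client; queue and variable names are assumptions). It tracks which delivery tags have already been acknowledged so each one is acked at most once:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
acked = set()

def on_message(ch, method, properties, body):
    tag = method.delivery_tag
    # ... process the message and its copies here ...
    if tag not in acked:  # ack each delivery tag at most once
        ch.basic_ack(delivery_tag=tag)
        acked.add(tag)
    # a second basic_ack(tag) would trigger 406 PRECONDITION_FAILED
    # on the server and close the channel

channel.basic_consume(queue="work", on_message_callback=on_message)
channel.start_consuming()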
I've been working with the Message Hub sample code found at this link: https://github.com/ibm-messaging/message-hub-samples
In particular, I've been trying to increase the throughput of the producer with the Kafka Java console example. I noticed the documentation in this snippet of code:
// Synchronously wait for a response from Message Hub / Kafka on every message produced.
// For high throughput the future should be handled asynchronously.
RecordMetadata recordMetadata = future.get(5000, TimeUnit.MILLISECONDS);
producedMessages++;
I've already turned off the thread sleep found later in the code, which also helped increase throughput, but I was hoping I could get some help on handling the future asynchronously in this block. Thanks in advance!
You have two basic options for handling the outcome of a produce request asynchronously:
1) Use the overloaded send with a completion callback argument, which will be invoked asynchronously:
public Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback);
If you use the callback, you may ignore the Future.
2) Pass the Future to some other thread you have created, and have it inspect the Future for completion, leaving the thread that calls send free to carry on. A sketch of the callback approach follows below.
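For illustration, here is the callback approach sketched in Python with kafka-python (the sample in the question is Java, where you would pass a Callback to the overloaded send shown above; the topic and broker address are assumptions). send returns immediately and the callbacks fire when the broker responds, so the producing loop never blocks on future.get:

from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")

def on_success(record_metadata):
    print("delivered to", record_metadata.topic, record_metadata.partition)

def on_error(exc):
    print("delivery failed:", exc)

future = producer.send("my-topic", b"payload")  # does not block
future.add_callback(on_success)
future.add_errback(on_error)

producer.flush()  # wait for outstanding sends before exiting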
I am having problems with twisted.internet.reactor. All my clients have completely identical environments, but only some experience this problem:
They correctly connectTCP to the server via ws and exchange the first several messages. About one minute in, they should send a message to the server via:
def execute(self, message, callback=None):
    print(">>>", message, flush=True)
    reactor.callFromThread(self._client_protocol_instance.send, message, callback)
The self._client_protocol_instance.send method is defined as follows:
def send(self, command, callback):
    print("send", command, callback, flush=True)
    timestamp = int(time() * 1000000)
    msg = (command.strip() + " --timestamp:" + str(timestamp))
    if self._debug:
        self._commands[str(timestamp)] = msg
    if callback is not None:
        self._callbacks[str(timestamp)] = callback
    payload = msg.encode()
    self._status_controller.set_state(payload)
    self.sendMessage(payload)
The first print shows up in stdout, but the second one doesn't, so I assume that send doesn't get executed. After reactor.run(), this is the only reference to the reactor in the entire program.
Killing the client's process after this happens is immediately detected by the server, so the connection was still alive at that time.
What could be causing this?
I found the solution: the problem was that the previous task sometimes hadn't finished by the time the client tried to send the next message.
I solved it by moving all CPU-heavy response-handling logic into threads to free up the reactor for other messages.
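A minimal sketch of that fix, pushing the CPU-heavy handling onto Twisted's thread pool with deferToThread so the reactor stays free to deliver other messages (function names and the payload are illustrative, not from the question's code):

from twisted.internet import reactor
from twisted.internet.threads import deferToThread

def expensive_parse(payload):
    # stand-in for the CPU-heavy response handling
    return payload.decode().upper()

def on_done(result):
    # called back in the reactor thread when the worker finishes
    print("processed:", result)

def on_message(payload):
    # run expensive_parse in a worker thread; the Deferred fires
    # with its result without blocking the reactor
    d = deferToThread(expensive_parse, payload)
    d.addCallback(on_done)

reactor.callWhenRunning(on_message, b"hello")
reactor.callLater(1, reactor.stop)
reactor.run()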
In my application, I have a queue (HornetQ) set up on JBoss 7 AS.
I have used Spring Batch to do some work once a message is received (save values in the database, etc.), and then the consumer commits the JMS session.
Sometimes, when there is an exception while processing a message, the execution of the consumer is aborted abruptly and the message remains in the "in delivery" state. There are about 30 messages in this state on my production queue.
I have tried restarting the consumer, but the state of these messages does not change. The only way to remove these messages from the queue is to restart the queue. But before doing that, I want a way to read these messages so that they can be corrected and sent to the queue again to be processed.
I have tried using QueueBrowser to read them, but it does not work. I have searched a lot on Google but could not find any way to read these messages.
I am using a transacted session; once a message is processed, I call:
session.commit();
This sends the acknowledgement.
I am implementing Spring's
org.springframework.jms.listener.SessionAwareMessageListener
to receive messages and then process them.
While processing the messages, I use Spring Batch to insert some data into the database.
For a particular case, it tries to insert data too big to fit in a column, which throws an exception, and the transaction is aborted.
Now, I have fixed my producer and consumer not to have such data, so that this case should not happen again.
But my question is what about the 30 "in delivery" state messages that are in my production queue? I want to read them so that they can be corrected and sent to the queue again to be processed. Is there any way to read these messages? Once I know their content, I can restart the queue and submit them again (after correcting them).
Thanking you in anticipation,
Suvarna
It all depends on the transaction mode you are using.
For instance, if you use transactions:
// session here is a TX session
MessageConsumer consumer = session.createConsumer(someQueue);
connection.start();
Message msg = consumer.receive...
session.rollback(); // this will make the messages be redelivered
If you are using non-TX:
// session here is auto-ack
MessageConsumer consumer = session.createConsumer(someQueue);
connection.start();
// this means the message is ACKed as we receive it (auto-ACK)
Message msg = consumer.receive...
// however, the consumer here could have a buffer from the server...
// if you are not using the consumer any longer, close it
consumer.close(); // this will release messages in the client buffer
Alternatively, you could also set consumerWindowSize=0 on the connection factory.
This is documented for 2.2.5, but it never changed in the following releases:
http://docs.jboss.org/hornetq/2.2.5.Final/user-manual/en/html/flow-control.html
I'm covering all the possibilities I could think of, since you're not being specific about how you are consuming. If you provide more detail, I will be able to tell you more.
You can indeed read the messages in the queue using JMX (with, for example, JConsole).
In JBoss AS7 you can do it the following way:
MBeans > jboss.as > messaging > default > myJmsQueue > Operations
listMessagesAsJson
Edit: since 2.3.0 there is a dedicated method for this specific case:
listDeliveringMessages
See https://issues.jboss.org/browse/HORNETQ-763