I have a simple scenario of publishing a message to an SNS topic that is subscribed to by an SQS queue, but somehow the queue never receives the messages (i.e., they don't show up in the SQS console). Here is the code in Ruby:
sns = Aws::SNS::Client.new;
sqs = Aws::SQS::Client.new;
q1 = sqs.create_queue({queue_name: "queue1”});
t1 = sns.create_topic({name: "topic1"});
q1_attr = sqs.get_queue_attributes({queue_url: q1.queue_url,attribute_names: ["All"]});
s1 = sns.subscribe({topic_arn: t1.topic_arn, protocol: "sqs", endpoint: q1_attr.attributes['QueueArn']});
resp = sns.publish({topic_arn: t1.topic_arn, message: "Test message"});
Is there anything missing?
The second quote in this line:
q1 = sqs.create_queue({queue_name: "queue1”});
is a 'fancy' quote ” instead of ". Change it to this:
q1 = sqs.create_queue({queue_name: "queue1"});
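If it helps to cross-check the call sequence, here is a rough equivalent in Python with boto3 (a sketch, not the asker's code; the queue/topic names are just examples and region/credentials come from the environment):
import boto3

# Sketch of the same SNS -> SQS flow using boto3.
sns = boto3.client("sns")
sqs = boto3.client("sqs")

queue_url = sqs.create_queue(QueueName="queue1")["QueueUrl"]
topic_arn = sns.create_topic(Name="topic1")["TopicArn"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# Note: for delivery to actually succeed, the queue's access policy must
# also allow sns.amazonaws.com to call sqs:SendMessage for this topic.
sns.publish(TopicArn=topic_arn, Message="Test message")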
I have read all the salient posts and the ZeroMQ guide on the topic of the pub/sub late joiner, and hopefully I simply missed the answer, but just in case I haven't, I have two questions about the ZeroMQ proxy in the context of the late joiner:
Does the ZeroMQ proxy with the suggested XSUB/XPUB sockets also suffer from the late-joiner problem, i.e. are the first few pub messages of a new publisher dropped?
If so, what is the accepted solution to ensure that even the first published message is received by subscribers with a matching topic? (My latest info is to sleep; see the sketch after the code below.)
I don't believe it is pertinent to the question, but just in case, here are:
My proxy's run method, which runs in its own thread; it starts a capture thread if DEBUG is true to log all messages; if no matching subscription exists, nothing is captured.
The proxy works fine, including the capture socket. However, I am now adding functionality where a new publisher is started in a thread and will immediately start to publish messages. Hence my question: if a message is published straight away, will it be dropped?
def run(self):
    msg = debug_log(self, f"{self.name} thread running...")
    debug_log(self, msg + "binding sockets...")
    xpub = self.zmq_ctx.socket(zmq.XPUB)
    xpub.bind(sys_conf.system__zmq_broker_xpub_addr)
    xsub = self.zmq_ctx.socket(zmq.XSUB)
    xsub.bind(sys_conf.system__zmq_broker_xsub_addr)
    if self.loglevel == DEBUG and sys_conf.system__zmq_broker_capt_addr:
        debug_log(self, msg + " done, now starting broker with message "
                              "capture and logging.")
        capt = self.zmq_ctx.socket(zmq.PAIR)
        capt.bind(sys_conf.system__zmq_broker_capt_addr)
        capt_th = Thread(
            name=self.name + "_capture_thread", target=self.capture_run,
            args=(self.zmq_ctx,), daemon=True
        )
        capt_th.start()
        capt.recv()  # wait for peer thread to start and check in
        debug_log(self, msg + "capture peer synchronised, proceeding.")
        zmq.proxy(xpub, xsub, capt)  # blocks until thread terminates
    else:
        debug_log(self, msg + " starting broker.")
        zmq.proxy(xpub, xsub)  # blocks until thread terminates
def capture_run(self, ctx):
    """Capture socket thread's run method."""
    debug_log(self, f"{self.name} capture thread running.")
    sock = ctx.socket(zmq.PAIR)
    sock.connect(sys_conf.system__zmq_broker_capt_addr)
    sock.send(b"")  # ack message to calling thread
    while True:
        try:  # assume we're getting topic string followed by python object
            topic = sock.recv_string()
            obj = sock.recv_pyobj()
        except Exception:  # if not, simply log message in bytes
            topic = None
            obj = sock.recv()
        debug_log(self, f"{self.name} capture_run received topic {topic}, "
                        f"obj {obj}.")
My users of the proxy (they are all both subscribers and publishers) run in different threads and/or processes from the proxy:
...
# establish zmq subscriber socket and connect to broker
self._evm_subsock = self._zmq_ctx.socket(zmq.SUB)
self.subscribe_topics()
self._evm_subsock.connect(sys_conf.system__zmq_broker_xpub_addr)
# establish pub socket and connect to broker
self._evm_pub_sock = self._zmq_ctx.socket(zmq.PUB)
self._evm_pub_sock.connect(sys_conf.system__zmq_broker_xsub_addr)
debug_log(self, msg + " Connected to pub/sub broker.")
...
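For reference, the crude "sleep" mitigation I have so far looks like the sketch below: the new publisher connects to the proxy and pauses briefly before its first send, so the TCP connection and subscription forwarding can settle. The address is a placeholder standing in for my real sys_conf.system__zmq_broker_xsub_addr:
import time
import zmq

XSUB_ADDR = "tcp://127.0.0.1:5556"  # placeholder address

ctx = zmq.Context.instance()
pub = ctx.socket(zmq.PUB)
pub.connect(XSUB_ADDR)

# Without this pause, the first messages may be dropped while the
# connection to the proxy is still being established.
time.sleep(0.2)

pub.send_string("some_topic", flags=zmq.SNDMORE)
pub.send_pyobj({"payload": 1})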
Regarding: “Sending response back to the out/reply queue.”
There is a requirement to send the response back to a different queue (reply queue).
While sending the response, we have to take the correlation and message IDs from the request message and pass them to the reply queue as headers. I suspect the format of the correlation/message ID is wrong.
When reading the request message, the correlation ID and message ID look like this:
MessageId = "ID:616365323063633033343361313165646139306638346264"
CorrelationId = "ID:36626161303030305f322020202020202020202020202020"
While sending the reply back to the out/reply queue, we are passing these IDs as below:
ITextMessage txtReplyMessage = sessionOut.CreateTextMessage();
txtReplyMessage.JMSMessageID = "616365323063633033343361313165646139306638346264";
txtReplyMessage.JMSCorrelationID = "36626161303030305f322020202020202020202020202020";
txtReplyMessage.Text = sentMessage.Contents;
txtReplyMessage.JMSDeliveryMode = DeliveryMode.NonPersistent;
txtReplyMessage.JMSPriority = sentMessage.Priority;
messageProducerOut.Send(txtReplyMessage);
Please note:
With the XMS.NET library, we need to pass the correlation and message IDs in string format, as shown above.
With the MQ APIs (which we were using earlier), we used to pass the correlation and message IDs in bytes format, like below:
MQMessage queueMessage = new MQMessage();
string[] parms = document.name.Split('-');
queueMessage.MessageId = StringToByte(parms[1]);
queueMessage.CorrelationId = StringToByte(parms[2]);
queueMessage.CharacterSet = 1208;
queueMessage.Encoding = MQC.MQENC_NATIVE;
queueMessage.Persistence = 0; // Do not persist the reply message.
queueMessage.Format = "MQSTR ";
queueMessage.WriteString(document.contents);
queueOut.Put(queueMessage);
queueManagerOut.Commit();
Please help to troubleshoot the problem.
Troubleshooting is a bit difficult because you haven't clearly specified the trouble (is there an exception, or is the message just not correlated successfully?).
In your code you have omitted the "ID:" prefix. However, to address the requirements, you should not need to bother too much about what is in this field, because you simply need to copy one value to the other:
txtReplyMessage.JMSCorrelationID = txtRequestMessage.JMSMessageID
It's a bit unclear what the issue is. Are you able to run the examples provided with the MQ tools? This approach uses temporary queues (AMQ.*) as the JMSReplyTo destination.
Start the "server" application first.
Request/Response Client: "SimpleRequestor"
Request/Response Server: "SimpleRequestorServer"
You can find the examples at the default install location (Windows):
"C:\Program Files\IBM\MQ\tools\dotnet\samples\cs\xms\simple\wmq"
The "SimpleMessageSelector" will show how to use the selector pattern.
Note the format on the selector: "JMSCorrelationID = '00010203040506070809'"
IBM MQ SELECTOR
I've got this code which, when I type !snapshot, should log the last 100 messages in the channel and turn them into a text file:
snapshot_channel = (my snapshot channel ID)

@commands.has_permissions(administrator=True)
@bot.command()
async def snapshot(ctx):
    channel = bot.get_channel(snapshot_channel)
    await ctx.message.delete()
    messages = message in ctx.history(limit=100)
    numbers = "\n".join(
        f"{message.author}: {message.clean_content}" for message in messages
    )
    f = BytesIO(bytes(numbers, encoding="utf-8"))
    file = discord.File(fp=f, filename="snapshot.txt")
    await channel.send("Snapshot requested has completed.")
    await channel.send(file=file)
I've got BytesIO imported, and it works fine for a different command which purges messages and logs them, but this code, which should just make a log and then send it in the channel, doesn't work. Please can you show me what it should look like for it to work? Thanks!
TextChannel.history is an async generator; you're not using it properly.
messages = await ctx.channel.history(limit=100).flatten()
numbers = "\n".join([f"{message.author}: {message.clean_content}" for message in messages])
Another option would be:
numbers = ""
async for message in ctx.channel.history(limit=100):
    numbers += f"{message.author}: {message.clean_content}\n"
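Putting it together, a minimal sketch of the whole command with the fix applied (bot, snapshot_channel, and the rest of the setup are assumed from the question):
from io import BytesIO

import discord
from discord.ext import commands

@commands.has_permissions(administrator=True)
@bot.command()
async def snapshot(ctx):
    channel = bot.get_channel(snapshot_channel)
    await ctx.message.delete()
    # history() returns an async iterator, so collect it with flatten()
    messages = await ctx.channel.history(limit=100).flatten()
    numbers = "\n".join(
        f"{message.author}: {message.clean_content}" for message in messages
    )
    f = BytesIO(numbers.encode("utf-8"))
    file = discord.File(fp=f, filename="snapshot.txt")
    await channel.send("Snapshot requested has completed.")
    await channel.send(file=file)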
I'm currently running some tests on the latest Pub/Sub release, google-cloud-pubsub==0.35.4. My intention is to process a never-ending stream (varying in load) using a dynamic number of subscriber clients.
However, when I have a queue of, say, 600 messages and one client running, and then add additional clients:
Expected: all remaining messages get distributed evenly across all clients
Observed: only new messages are distributed across clients; any older messages are sent to pre-existing clients
Below is a simplified version of what I use for my clients (for reference, we'll only be running the low-priority topic).
I won't include the publisher, since it has no relation to the problem.
import datetime
import json
import logging
import random
import time
import Queue  # Python 2 standard-library queue (the code uses iteritems)
from collections import defaultdict

from google.cloud import pubsub_v1

# BASE_TOPIC_PATH and PROJECT are defined elsewhere; omitted from this
# simplified version.

PRIORITY_HIGH = 1
PRIORITY_MEDIUM = 2
PRIORITY_LOW = 3

MESSAGE_LIMIT = 10
ACKS_PER_MIN = 100.00

ACKS_RATIO = {
    PRIORITY_LOW: 100,
}

PRIORITY_TOPICS = {
    PRIORITY_LOW: 'test_low',
}

PRIORITY_SEQUENCES = {
    PRIORITY_LOW: [PRIORITY_LOW, PRIORITY_MEDIUM, PRIORITY_HIGH],
}


class Subscriber:
    subscriber_client = None
    subscriptions = {}
    priority_queue = defaultdict(Queue.Queue)
    priorities = []

    def __init__(self):
        logging.basicConfig()
        self.subscriber_client = pubsub_v1.SubscriberClient()
        for option, percentage in ACKS_RATIO.iteritems():
            self.priorities += [option] * percentage

    def subscribe_to_topic(self, topic, max_messages=10):
        self.subscriptions[topic] = self.subscriber_client.subscribe(
            BASE_TOPIC_PATH.format(project=PROJECT, topic=topic),
            self.process_message,
            flow_control=pubsub_v1.types.FlowControl(
                max_messages=max_messages,
            ),
        )

    def un_subscribe_from_topic(self, topic):
        subscription = self.subscriptions.get(topic)
        if subscription:
            subscription.cancel()
            del self.subscriptions[topic]

    def process_message(self, message):
        json_message = json.loads(message.data.decode('utf8'))
        self.priority_queue[json_message['priority']].put(message)

    def retrieve_message(self):
        message = None
        priority = random.choice(self.priorities)
        ack_priorities = PRIORITY_SEQUENCES[priority]
        for ack_priority in ack_priorities:
            try:
                message = self.priority_queue[ack_priority].get(block=False)
                break
            except Queue.Empty:
                pass
        return message


if __name__ == '__main__':
    messages_acked = 0
    pub_sub = Subscriber()
    pub_sub.subscribe_to_topic(PRIORITY_TOPICS[PRIORITY_LOW], MESSAGE_LIMIT * 3)
    while True:
        msg = pub_sub.retrieve_message()
        if msg:
            json_msg = json.loads(msg.data.decode('utf8'))
            msg.ack()
            print("%s - Acked Priority %s, High %s, Medium %s, Low %s" % (
                datetime.datetime.now().strftime('%H:%M:%S'),
                json_msg['priority'],
                pub_sub.priority_queue[PRIORITY_HIGH].qsize(),
                pub_sub.priority_queue[PRIORITY_MEDIUM].qsize(),
                pub_sub.priority_queue[PRIORITY_LOW].qsize(),
            ))
        time.sleep(60.0 / ACKS_PER_MIN)
I'm wondering if this behaviour is inherent to how streaming pulls function, or if there are configurations that can alter this behaviour.
Cheers!
According to the Cloud Pub/Sub documentation, Cloud Pub/Sub delivers each published message at least once for every subscription; nevertheless, there are some exceptions to this behavior:
A message that cannot be delivered within the maximum retention time of 7 days is deleted.
A message published before a given subscription was created will not be delivered.
In other words, the service delivers messages to the subscriptions that were created before the message was published; therefore, old messages will not be available to new subscriptions. As far as I know, Cloud Pub/Sub does not offer a feature to change this behavior.
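As a side note, since your clients all attach to the same subscription, part of what you observe may also come from flow control: each streaming pull client buffers up to max_messages leased messages, so an existing client can already be holding much of the backlog when a new client joins. A hedged sketch (the project and subscription names are placeholders, not from your code) of tightening that limit with the same FlowControl type you already use:
from google.cloud import pubsub_v1

PROJECT = "my-project"     # placeholder
SUBSCRIPTION = "test_low"  # placeholder

def callback(message):
    message.ack()

subscriber = pubsub_v1.SubscriberClient()
path = subscriber.subscription_path(PROJECT, SUBSCRIPTION)

# A smaller max_messages means each client leases fewer messages at once,
# leaving more of an existing backlog available to late-joining clients.
future = subscriber.subscribe(
    path,
    callback,
    flow_control=pubsub_v1.types.FlowControl(max_messages=5),
)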
I need to set up a client which will send SQS messages to a server:
client side:
...
sqs = AWS::SQS.new
q = sqs.queues.create("q_name")
m = q.send_message("meta")
...
But how can the server read the client's messages?
Thank you in advance.
First you need to have your server connect to SQS; then you can get your queue.
Call get_messages on your queue; see the boto docs for more information on the parameters. This will give you 1 to 10 message objects, depending on your parameters. Then call get_body() on each of those objects and you'll have the string of the message.
Here's a simple example in Python. Sorry, I don't know Ruby.
from boto.sqs import connect_to_region

sqsConn = connect_to_region("us-west-1",  # the region you created the queue in
                            aws_access_key_id=AWS_ACCESS_KEY_ID,
                            aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
QUEUE = sqsConn.get_queue("my-queue")  # the name of your queue
msgs = QUEUE.get_messages(num_messages=10,        # try and get 10 messages
                          wait_time_seconds=1,    # wait 1 second for these messages
                          visibility_timeout=10)  # keep them visible for 10 seconds
body = msgs[0].get_body()  # get the string from the first object
Hope this helps.
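One follow-up detail worth noting (an addition, not in the original answer): after successfully processing a message, you should delete it from the queue; otherwise it becomes visible again once the visibility timeout expires and will be redelivered. With boto, reusing QUEUE and msgs from above, that looks roughly like this:
for m in msgs:
    print(m.get_body())      # process the message
    QUEUE.delete_message(m)  # acknowledge by deleting it from the queue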