celery task with routing key on SQS - django-celery

I have a django-celery setup and I'm switching over from RabbitMQ to SQS as the broker. I'm finding that tasks decorated with a routing_key value are not producing any messages in the broker.
My setup is:
CELERY_QUEUES = {
    "default": {
        "exchange": "default",
        "binding_key": "default"},
    "sentry": {
        "exchange": "default",
        "binding_key": "sentry"},
}
CELERY_DEFAULT_QUEUE = "default"
CELERY_DEFAULT_EXCHANGE = "default"
CELERY_DEFAULT_EXCHANGE_TYPE = "direct"
CELERY_DEFAULT_ROUTING_KEY = "default"
The task is defined as:
@task(routing_key='sentry', ignore_result=True)
def doSomething():
    print "Hello"

doSomething.delay()  # No message is produced
Everything routes fine into the default queue, yet, strangely, everything works fine with RabbitMQ.
The Amazon SQS console shows the queue 'sentry', but no messages are sent to it (I'm not sure what created the queue).
Bonus: oddly, when I first tried this out (about 4 hours ago), some messages did appear to make it to the sentry queue. What could have caused this?
Thanks

I tried your example with one change: instead of defining the routing key in the task decorator, I specified the queue in the call to the task, and I changed the call from delay to apply_async.
So my code looks like this:
@task
def doSomething():
    print "Hello"

doSomething.apply_async(queue='sentry')
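Another route worth noting (a minimal sketch, assuming the old pre-4.0 setting names used above and a placeholder task path myapp.tasks.doSomething): the queue can also be chosen in settings via CELERY_ROUTES, which avoids touching every call site. SQS itself has no exchanges or routing keys, so routing by queue name is what maps reliably onto it.
CELERY_ROUTES = {
    # placeholder module path; point this at the real task
    "myapp.tasks.doSomething": {"queue": "sentry"},
}
With that in place, a plain doSomething.delay() should land in the sentry queue.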

Related

Event bus name not registering when attempting to connect EventBridge and Lambda using Terraform

I am attempting to create an EventBridge integration that will receive notifications from Datadog and trigger a Lambda function to store the notifications in an S3 bucket. This is all done through Terraform.
The following is the code I have written:
###################################
# Eventbridge Integration to Lambda
###################################
data "aws_cloudwatch_event_source" "datadog_event_source" {
  name_prefix = var.spog_event_bus_name # aws.partner/datadog.com/my_eventbus
}

resource "aws_cloudwatch_event_bus" "datadog_event_bus" {
  name              = data.aws_cloudwatch_event_source.datadog_event_source.name
  event_source_name = data.aws_cloudwatch_event_source.datadog_event_source.name
}

resource "aws_cloudwatch_event_rule" "spog_cloudwatch_rule" {
  name           = "spog_cloudwatch_rule"
  event_bus_name = aws_cloudwatch_event_bus.datadog_event_bus.name
  event_pattern  = <<EOF
{
  "account": [
    "${var.aws_account_id}"
  ]
}
EOF
}

resource "aws_cloudwatch_event_target" "spog_cloudwatch_event_target" {
  rule      = aws_cloudwatch_event_rule.spog_cloudwatch_rule.name
  target_id = aws_lambda_function.write_datadog_events.function_name
  arn       = aws_lambda_function.write_datadog_events.arn
}

resource "aws_lambda_permission" "spog_allow_cloudwatch" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.write_datadog_events.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.spog_cloudwatch_rule.arn
}
Here, write_datadog_events is the Lambda function that stores the notifications.
The plan builds, but when I try to apply it I get an error saying "ValidationException: EventBus name starting with 'aws.' is not valid". From inspecting the AWS console, it seems that the EventBridge resources and the rule are created successfully, but the event bus name is not registered properly: the rule's event bus only says "default". I thought that setting the event_bus_name value within aws_cloudwatch_event_rule would change that, but I was wrong.
Can anyone help me out with this? The Lambda function itself is not the problem (I ran a test case on it); the core issue seems to be EventBridge not registering my event bus name. Also, although the rule is generated, the event bus never is.
Thanks for your help.

AWS Lambda missing events from DynamoDB stream

I am consistently missing Lambda triggers from a DynamoDB stream. I am mapping the DynamoDB stream to the Lambda function with an aws_lambda_event_source_mapping resource.
resource "aws_lambda_event_source_mapping" "history" {
event_source_arn = "${data.aws_dynamodb_table.auditlogDynamoDB.stream_arn}"
function_name = "${module.cx-clientcomm-history-lambda.lambda_function_arn}"
starting_position = "LATEST"
maximum_retry_attempts = 0
maximum_record_age_in_seconds = 604800 // adding to explicitly set. QA plan was providing invalid value
#(Optional) An Amazon SQS queue or Amazon SNS topic destination for failed records.
destination_config {
on_failure {
destination_arn = "${data.aws_sns_topic.cx-clientcomm-sns-slack.arn}"
}
}
}
It appears that when entries arrive in DynamoDB very close together, some are not picked up by the event source mapping and are never sent to the Lambda.
Is there a way to make sure this doesn't happen? This needs to be very reliable.

MassTransit endpoint name is ignored in ConsumerDefinition

The EndpointName property in a ConsumerDefinition file seems to be ignored by MassTransit. I know the ConsumerDefinition is being used because the retry logic works. How do I get different commands to go to different queues? I can get them all to go through one central queue, but I don't think that is best practice for commands.
Here is my app configuration that executes on startup when creating the MassTransit bus.
Bus.Factory.CreateUsingAzureServiceBus(cfg =>
{
    cfg.Host(_config.ServiceBusUri, host =>
    {
        host.SharedAccessSignature(s =>
        {
            s.KeyName = _config.KeyName;
            s.SharedAccessKey = _config.SharedAccessKey;
            s.TokenTimeToLive = TimeSpan.FromDays(1);
            s.TokenScope = TokenScope.Namespace;
        });
    });

    cfg.ReceiveEndpoint("publish", ec =>
    {
        // this is done to register all consumers in the assembly and to use their definition files
        ec.ConfigureConsumers(provider);
    });
});
And here is my handler definition in the consumer (an Azure worker service):
public class CreateAccessPointCommandHandlerDef : ConsumerDefinition<CreateAccessPointCommandHandler>
{
    public CreateAccessPointCommandHandlerDef()
    {
        EndpointName = "specific";
        ConcurrentMessageLimit = 4;
    }

    protected override void ConfigureConsumer(
        IReceiveEndpointConfigurator endpointConfigurator,
        IConsumerConfigurator<CreateAccessPointCommandHandler> consumerConfigurator
    )
    {
        endpointConfigurator.UseMessageRetry(r =>
        {
            r.Immediate(2);
        });
    }
}
In the app that is sending the message, I have to configure it to send to the "publish" queue, not "specific".
EndpointConvention.Map<CreateAccessPointsCommand>(new Uri($"queue:specific")); // does not work
EndpointConvention.Map<CreateAccessPointsCommand>(new Uri($"queue:publish")); // this does work
Because you are configuring the receive endpoint yourself, and giving it the name publish, that's the receive endpoint.
To configure the endpoints using the definitions, use:
cfg.ConfigureEndpoints(provider);
This will use the definitions that were registered in the container to configure the receive endpoints, using the consumer endpoint name defined.
This is also explained in the documentation.

Akka.Net Clustering Simple Explanation

I am trying to set up a simple cluster using Akka.NET.
The goal is to have a server receiving requests and Akka.NET processing them through its cluster.
For testing and learning, I created a simple WCF service that receives a math equation, and I want to send this equation to the cluster to be solved.
I have one server project and one client project.
The configuration on the server side is:
<![CDATA[
akka {
    actor {
        provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
        debug {
            receive = on
            autoreceive = on
            lifecycle = on
            event-stream = on
            unhandled = on
        }
        deployment {
            /math {
                router = consistent-hashing-group #round-robin-pool # routing strategy
                routees.paths = [ "/user/math" ]
                virtual-nodes-factor = 8
                #nr-of-instances = 10 # max number of total routees
                cluster {
                    enabled = on
                    max-nr-of-instances-per-node = 2
                    allow-local-routees = off
                    use-role = math
                }
            }
        }
    }
    remote {
        helios.tcp {
            transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
            applied-adapters = []
            transport-protocol = tcp
            port = 8081
            hostname = "127.0.0.1"
        }
    }
    cluster {
        seed-nodes = ["akka.tcp://ClusterSystem#127.0.0.1:8081"] # address of seed node
    }
}
]]>
On the client side, the configuration is like this:
<![CDATA[
akka {
    actor.provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
    remote {
        log-remote-lifecycle-events = DEBUG
        log-received-messages = on
        helios.tcp {
            transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
            applied-adapters = []
            transport-protocol = tcp
            port = 0
            hostname = 127.0.0.1
        }
    }
    cluster {
        seed-nodes = ["akka.tcp://ClusterSystem#127.0.0.1:8081"] # address of the seed node
        roles = ["math"] # roles this member is in
    }
    actor.deployment {
        /math {
            router = round-robin-pool # routing strategy
            routees.paths = ["/user/math"]
            nr-of-instances = 10 # max number of total routees
            cluster {
                enabled = on
                allow-local-routees = on
                use-role = math
                max-nr-of-instances-per-node = 10
            }
        }
    }
}
]]>
The cluster connection seems to be made correctly: I see the status [UP] and the association with the role "math" appear on the server side.
Even following the WebCrawler example, I haven't managed to get a message delivered; I always get dead letters.
I have tried like this:
actor = sys.ActorOf(Props.Empty.WithRouter(FromConfig.Instance), "math");
or
var actor = sys.ActorSelection("/user/math");
Does someone know a good tutorial, or could someone help me?
Thanks
Some remarks:
First: assuming you are sending work from the server to the client, you are effectively remote-deploying actors on your client.
That means only the server node needs the actor.deployment config section.
The client only needs the default cluster config (and your role setting, of course).
Second: try to make it simpler first. Use a round-robin-pool instead; it's much simpler. Get that working, then work your way up from there.
That way it's easier to eliminate configuration/network/other issues.
Your usage: actor = sys.ActorOf(Props.Empty.WithRouter(FromConfig.Instance), "math"); is correct.
A sample of how your round-robin-pool config could look:
deployment {
    /math {
        router = round-robin-pool # routing strategy
        nr-of-instances = 10 # max number of total routees
        cluster {
            enabled = on
            max-nr-of-instances-per-node = 2
            allow-local-routees = off
            use-role = math
        }
    }
}
Try this out. And let me know if that helps.
Edit:
OK, after looking at your sample, here are some things I changed:
ActorManager->Process: you are creating a new router actor per request. Don't do that; create the router actor once and reuse the IActorRef.
I got rid of the minimal cluster size settings in the MathAgentWorker project.
Since you are not using remote actor deployment, I changed the round-robin-pool to a round-robin-group.
After that it worked.
Also remember that if you are using the consistent-hashing-group router, you need to specify the hashing key. There are various ways to do that; in your sample I think the easiest way would be to wrap the message you are sending to your router in a ConsistentHashableEnvelope. Check the docs for more information.
Finally, the akka deployment section looked like this:
deployment {
    /math {
        router = round-robin-group # routing strategy
        routees.paths = ["/user/math"]
        cluster {
            enabled = on
            allow-local-routees = off
            use-role = math
        }
    }
}
On the MathAgentWorker I only changed the cluster section, which now looks like this:
cluster {
    seed-nodes = ["akka.tcp://ClusterSystem#127.0.0.1:8081"] # address of the seed node
    roles = ["math"] # roles this member is in
}
And the only thing that the ActorManager.Process does is:
return await Program.Instance.RouterInstance.Ask<TResult>(msg, TimeSpan.FromSeconds(10));

Subscribing to a removed queue with spring-websocket and RabbitMQ broker (Queue NOT_FOUND)

I have a spring-websocket (4.1.6) application on Tomcat 8 that uses a STOMP RabbitMQ (3.4.4) message broker for messaging. When a client (Chrome 47) starts the application, it subscribes to an endpoint, creating a durable queue. When this client unsubscribes from the endpoint, the queue is cleaned up by RabbitMQ after 30 seconds, as defined in a custom RabbitMQ policy. When I try to reconnect to an endpoint whose queue was cleaned up, I receive the following exception in the RabbitMQ logs: "NOT_FOUND - no queue 'position-updates-user9zm_szz9' in vhost '/'\n". I don't want to use an auto-delete queue since I have some reconnect logic in case the websocket connection dies.
This problem can be reproduced by adding the following code to the spring-websocket-portfolio GitHub example.
In the container div in index.html, add:
<button class="btn" onclick="appModel.subscribe()">SUBSCRIBE</button>
<button class="btn" onclick="appModel.unsubscribe()">UNSUBSCRIBE</button>
In portfolio.js replace:
stompClient.subscribe("/user/queue/position-updates", function(message) {
with:
positionUpdates = stompClient.subscribe("/user/queue/position-updates", function(message) {
and also add the following:
self.unsubscribe = function() {
    positionUpdates.unsubscribe();
}

self.subscribe = function() {
    positionUpdates = stompClient.subscribe("/user/queue/position-updates", function(message) {
        self.pushNotification("Position update " + message.body);
        self.portfolio().updatePosition(JSON.parse(message.body));
    });
}
Now you can reproduce the problem by:
1. Launch the application.
2. Click unsubscribe.
3. Delete the position-updates queue in the RabbitMQ console.
4. Click subscribe.
5. Find the error message in the websocket frame via the Chrome devtools and in the RabbitMQ logs.
Your "reconnect logic in case the websocket connection dies" and the "no queue 'position-updates-user9zm_szz9' in vhost" error are two entirely different stories.
I'd suggest you implement "re-subscribe" logic for the case of a deleted queue.
That is actually how STOMP works: it creates an auto-delete (generated) queue for the subscription and, yes, it is removed on unsubscribe.
See more info in the RabbitMQ STOMP Adapter Manual.
On the other hand, consider subscribing to an existing AMQP queue:
To address existing queues created outside the STOMP adapter, destinations of the form /amq/queue/<name> can be used.
The problem is that STOMP won't recreate the queue if it gets deleted by the RabbitMQ policy. I worked around it by creating the queue myself when the SessionSubscribeEvent is fired.
public void onApplicationEvent(AbstractSubProtocolEvent event) {
    if (event instanceof SessionSubscribeEvent) {
        MultiValueMap nativeHeaders = (MultiValueMap) event.getMessage().getHeaders().get("nativeHeaders");
        List destination = (List) nativeHeaders.get("destination");
        String queueName = ((String) destination.get(0)).substring("/queue/".length());
        try {
            Connection connection = connectionFactory.newConnection();
            Channel channel = connection.createChannel();
            channel.queueDeclare(queueName, true, false, false, null);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
