I have the following test code:
using (ShimsContext.Create())
{
    // act
    sut.MethodCall();
}
The SUT has the following method (for MethodCall):
Dim mq As New MSMQ.MessageQueue(messageQPath)
mq.Send(mqMsg)
But I'm getting following error:
"The queue does not exist or you do not have sufficient permissions to perform the operation."
Obviously the queue won't exist and I won't have sufficient permissions if I don't have a queue created on the fake message queue. Has anyone got any experience working with MSMQ and Fakes, so that the call to MSMQ's Send is basically a no-op that I can verify?
The shim needs to be set up like so:
ShimMessageQueue.AllInstances.SendObject = (m, o) =>
{
    // verification code here
};
As Fakes doesn't have the concept of verifying a call directly through the framework, you simply put the verification code inside the lambda for the SendObject call.
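For example, a minimal sketch of that idea (assuming the generated shim type is ShimMessageQueue from the System.Messaging fakes assembly, MSTest assertions, and a SUT that sends exactly one message) could capture the sent message and assert on it after the act step:

using (ShimsContext.Create())
{
    object sentMessage = null;
    var sendCount = 0;

    // Replace Send(object) on every MessageQueue instance with a no-op that records the call.
    ShimMessageQueue.AllInstances.SendObject = (queue, msg) =>
    {
        sentMessage = msg;
        sendCount++;
    };

    // act
    sut.MethodCall();

    // assert
    Assert.AreEqual(1, sendCount);
    Assert.IsNotNull(sentMessage);
}

Because the shim intercepts the call, no real queue has to exist and no permissions are needed.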
I want to be able to call an HTTP endpoint (that I own) from an Azure Function at the end of the Azure Function request.
I do not need to know the result of the request
If there is a problem in the HTTP endpoint that is called, I will log it there
I do not want to hold up the return to the client calling the initial Azure Function
Offloading the call of the secondary WebApi onto a background job queue is considered overkill for this requirement
Do I simply call HttpClient.PutAsync without an await?
I realise that the dependencies I have used up until the point that the call is made may well not be available when the call returns. Is there a safe way to check if they are?
My answer may cause some controversy, but you can always start a background task and execute the call that way.
For anyone reading this answer, this is far from recommended. The OP has been very clear that they don't care about exceptions or understanding what sort of result the request is returning ...
Task.Run(async () =>
{
    using (var httpClient = new HttpClient())
    {
        await httpClient.PutAsync(...);
    }
});
If you want to ensure that the call has fired, it may be worth waiting for a second or two after the call is made to ensure it's actually on its way.
await Task.Delay(1000);
If you're worried about dependencies in the call, be sure to construct your payload (i.e. serialise it, etc.) outside the Task.Run; basically, minimise the work the background task does.
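A rough sketch of that idea (assuming System.Net.Http, System.Text and System.Text.Json, and hypothetical endpointUri and payload variables) is to do the serialisation up front so the background task only touches values it has already captured:

// endpointUri and payload are placeholders for your own values.
// Build everything that depends on request-scoped services before starting the task.
var json = JsonSerializer.Serialize(payload);
var content = new StringContent(json, Encoding.UTF8, "application/json");

// Fire and forget: the task only uses the pre-built content and the endpoint URI.
_ = Task.Run(async () =>
{
    using (var httpClient = new HttpClient())
    {
        await httpClient.PutAsync(endpointUri, content);
    }
});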
My Lambda function is required to send a token back to the step function for it to continue, as it is a task within the state machine.
Looking at my try/catch block of the lambda function, I am contemplating:
The order of SendTaskHeartbeatCommand and SendTaskSuccessCommand
The required parameters of SendTaskHeartbeatCommand
Whether I should add the SendTaskHeartbeatCommand to the catch block, and if so, in which order the calls should go.
Current code:
try {
    const magentoCallResponse = await axios(requestObject);
    await stepFunctionClient.send(new SendTaskHeartbeatCommand(taskToken));
    await stepFunctionClient.send(new SendTaskSuccessCommand({output: JSON.stringify(magentoCallResponse.data), taskToken}));
    return magentoCallResponse.data;
} catch (err: any) {
    console.log("ERROR", err);
    await stepFunctionClient.send(new SendTaskFailureCommand({error: JSON.stringify("Error Sending Data into Magento"), taskToken}));
    return false;
}
I have read the documentation for AWS SDK V3 for SendTaskHeartbeatCommand and am confused with the required input.
The SendTaskHeartbeat and SendTaskSuccess API actions serve different purposes.
When your task completes, you call SendTaskSuccess to report this back to Step Functions and to provide the results from the Task that your workflow can then process. You do not need to call SendTaskHeartbeat before SendTaskSuccess, and the usage you have in the code above seems unnecessary.
SendTaskHeartbeat is optional and you use it when you've set "HeartbeatSeconds" on your Task. When you do this, you then need your worker (i.e. the Lambda function in this case) to send back regular heartbeats while it is processing work. I'd expect that to be running asynchronously while your code above is executing the first line in the try block. The reason for having heartbeats is that you can set a longer TimeoutSeconds (or set it dynamically using TimeoutSecondsPath) than HeartbeatSeconds, so you fail / retry fast when the worker dies (heartbeat timeout) while still allowing your tasks to take longer to complete.
That said, it's not clear why you are using .waitForTaskToken with Lambda. Usually, you can just use the default Request Response integration pattern with Lambda. This uses the synchronous invoke mode for Lambda and will return the response back to you without you needing to integrate back with Step Functions in your Lambda code. Possibly you are reading these off of an SQS queue for concurrency control or something. But if not, just use Request Response.
I'm wondering if I'm doing something wrong; I expected MassTransit to automatically register ReceiveEndpoints in the EndpointConvention.
Sample code:
services.AddMassTransit(x =>
{
    x.AddServiceBusMessageScheduler();
    x.AddConsumersFromNamespaceContaining<MyNamespace.MyRequestConsumer>();

    x.UsingAzureServiceBus((context, cfg) =>
    {
        // Load the connection string from the configuration.
        cfg.Host(context.GetRequiredService<IConfiguration>().GetValue<string>("ServiceBus:ConnectionString"));
        cfg.UseServiceBusMessageScheduler();

        // Without this line I'm getting an error complaining that no endpoint convention for MyRequest could be found.
        EndpointConvention.Map<MyRequest>(new Uri("queue:queue-name"));

        cfg.ReceiveEndpoint("queue-name", e =>
        {
            e.MaxConcurrentCalls = 1;
            e.ConfigureConsumer<MyRequestConsumer>(context);
        });

        cfg.ConfigureEndpoints(context);
    });
});
I thought this line EndpointConvention.Map<MyRequest>(new Uri("queue:queue-name")); wouldn't be necessary to allow sending to the bus without specifying the queue name, or am I missing something?
await bus.Send<MyRequest>(new { ...});
The EndpointConvention is a convenience method that allows the use of Send without specifying the endpoint address. There is nothing in MassTransit that will automatically configure this because, frankly, I don't use it. And I don't think anyone else should either. That stated, people do use it for whatever reason.
First, think about the ramifications: if every message type were registered as an endpoint convention, what about messages that are published and consumed on multiple endpoints? That wouldn't work.
So, if you want to route messages by message type, MassTransit has a feature for that. It's called Publish and it works great.
But wait, it's a command, and commands should be Sent.
That is true; however, if you are in control of the application and you know that there is only one consumer in your code base that consumes the KickTheTiresAndLightTheFires message contract, publish is as good as send and you don't need to know the address!
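For example (the contract name comes from above and the property is made up purely for illustration):

// KickTheTiresAndLightTheFires and VehicleId are illustrative names only.
await bus.Publish<KickTheTiresAndLightTheFires>(new { VehicleId = 42 });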
No, seriously dude, I want to use Send!
Okay, fine, here are the details. When using ConfigureEndpoints(), MassTransit uses the IEndpointNameFormatter to generate the receive endpoint queue names based upon the types registered via AddConsumer, AddSagaStateMachine, etc., and that same interface can be used to register your own endpoint conventions if you want to use Send without specifying a destination address.
You are, of course, coupling the knowledge of your consumer and message types, but that's your call. You're already dealing with magic (by using Send without an explicit destination), so why not, right?
string queueName = formatter.Consumer<T>();
Use that string for the message types in that consumer as a $"queue:{queueName}" address and register it on the EndpointConvention.
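Put together, a rough sketch of that idea (assuming the consumer and message types from the question and the default endpoint name formatter; adjust if you've registered a custom formatter):

// The same formatter ConfigureEndpoints uses by default to name receive endpoints.
IEndpointNameFormatter formatter = DefaultEndpointNameFormatter.Instance;

// e.g. "MyRequest" for MyRequestConsumer with the default formatter.
string queueName = formatter.Consumer<MyRequestConsumer>();

// Register the convention so Send<MyRequest> can resolve its destination address.
EndpointConvention.Map<MyRequest>(new Uri($"queue:{queueName}"));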
Or, you know, just use Publish.
I currently have one AWS Lambda function that is updating a DynamoDB table, and I need another Lambda function that needs to run after the data is updated. Is there any benefit to using a DynamoDB trigger in this case instead of invoking the second Lambda using the first one?
It looks like the programmatic invocation would give me more control over when the Lambda is called (i.e. I could wait for several updates to occur before calling), and reading from a DynamoDB Stream costs money while simply invoking the Lambda does not.
So, is there a benefit to using a trigger here? Or would I be better off invoking the Lambda myself?
A DynamoDB Stream seems to be the better practice because:
you delegate the responsibility of invoking the post-processor function away from your writer Lambda, which keeps the writer simpler (and faster);
you make it easier to connect new external writers to the same table; otherwise you would have to implement the logic to call the post-processor in each of them as well;
you guarantee that all data is post-processed (even if somebody adds a new item through the DynamoDB web console :));
moneywise, the execution time you spend issuing the invoke() call from the writer Lambda will likely cover the cost of a stream;
unless you use DynamoDB transactions, your data may not yet be available to the post-processor if you invoke it from the writer too soon; if your business logic doesn't need transactions, using them just to cover this problem means extra time/cost.
P.S. You can of course batch from the DynamoDB stream out of the box with a simple setting; you are not obliged to invoke the post-processor for every write operation.
After the data is updated, you can publish an SQS message, then configure the other function to read from Amazon SQS by adding an SQS trigger in the Lambda console.
To create a trigger
Open the Lambda console Functions page.
Choose a function.
Under Designer, choose Add trigger.
Choose a trigger type.
Configure the required options and then choose Add.
Lambda supports the following options for Amazon SQS event sources.
Event Source Options
SQS queue – The Amazon SQS queue to read records from.
Batch size – The number of items to read from the queue in each batch, up to 10. The event may contain fewer items if the batch that Lambda read from the queue had fewer items.
Enabled – Disable the event source to stop processing items.
var QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/{AWS_ACCOUNT_ID}/matsuoy-lambda';
var AWS = require('aws-sdk');
var sqs = new AWS.SQS({region: 'us-east-1'});

exports.handler = function(event, context) {
    var params = {
        MessageBody: JSON.stringify(event),
        QueueUrl: QUEUE_URL
    };
    sqs.sendMessage(params, function(err, data) {
        if (err) {
            console.log('error:', "Fail Send Message" + err);
            context.done('error', "ERROR Put SQS"); // ERROR with message
        } else {
            console.log('data:', data.MessageId);
            context.done(null, ''); // SUCCESS
        }
    });
};
Please don't forget to add a trigger for the other function on this SQS queue. That function will then receive the SQS message automatically and can handle it.
Does the Azure Service Bus Subscription client support the ability to use OnMessage Action when the subscription requires a session?
I have a subscription, called "TestSubscription". It requires a sessionId and contains multipart data that is tied together by a SessionId.
if (!namespaceManager.SubscriptionExists("TestTopic", "Export"))
{
    var testRule = new RuleDescription
    {
        Filter = new SqlFilter(@"(Action='Export')"),
        Name = "Export"
    };

    var subDesc = new SubscriptionDescription("DataCollectionTopic", "Export")
    {
        RequiresSession = true
    };

    namespaceManager.CreateSubscription(subDesc, testRule);
}
In a separate project, I have a Service Bus Monitor and WorkerRole, and in the Worker Role, I have a SubscriptionClient, called "testSubscriptionClient":
testSubscriptionClient = SubscriptionClient.CreateFromConnectionString(connectionString, _topicName, CloudConfigurationManager.GetSetting("testSubscription"), ReceiveMode.PeekLock);
I would then like to have OnMessage triggered when new items are placed in the service bus queue:
testSubscriptionClient.OnMessage(PersistData);
However I get the following message when I run the code:
InvalidOperationException: It is not possible for an entity that requires sessions to create a non-sessionful message receiver
I am using Azure SDK v2.8.
Is what I am looking to do possible? Are there specific settings that I need to make in my service bus monitor, subscription client, or elsewhere that would let me retrieve messages from the subscription in this manner? As a side note, this approach works perfectly in other cases I have in which I am not using session-based data.
Can you try this code:
var messageSession=testSubscriptionClient.AcceptMessageSession();
messageSession.OnMessage(PersistData);
instead of this:
testSubscriptionClient.OnMessage(PersistData);
Edit:
Also, you can register your handler to handle sessions (RegisterSessionHandler). It will invoke your handler as new sessions and their messages arrive.
I think this is more suitable for your problem.
He shows both ways in this article. It's for a queue, but I think you can apply it to a topic as well.
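For reference, a rough sketch of the session-handler approach with the Microsoft.ServiceBus.Messaging API used by SDK v2.x (the handler class name and options are illustrative, and exact signatures may vary slightly between SDK versions):

// A new instance of this handler is created for each session the client accepts.
// PersistDataSessionHandler is an illustrative name.
class PersistDataSessionHandler : IMessageSessionAsyncHandler
{
    public Task OnMessageAsync(MessageSession session, BrokeredMessage message)
    {
        // Process one message belonging to the session, e.g. call your PersistData logic.
        return Task.FromResult(0);
    }

    public Task OnCloseSessionAsync(MessageSession session)
    {
        return Task.FromResult(0);
    }

    public Task OnSessionLostAsync(Exception exception)
    {
        return Task.FromResult(0);
    }
}

// Register the handler on the sessionful subscription client.
testSubscriptionClient.RegisterSessionHandler(
    typeof(PersistDataSessionHandler),
    new SessionHandlerOptions { AutoComplete = true });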