Is SmtpClient.SendAsync() really faster than SmtpClient.Send() - async-await

I was refactoring some of my code into a service and was going to roll async all the way down to my EmailService's Send() method.
I was about to replace Send() with SendAsync() and noticed the extra callback parameter.
Well, I decided to dig in and read about it in a little more depth here:
https://learn.microsoft.com/en-us/dotnet/api/system.net.mail.smtpclient.sendasync?view=netcore-3.1#definition
I think this would be useful for setting up database logging on an error:
if (e.Error != null)
{
    Console.WriteLine("[{0}] {1}", token, e.Error.ToString());
    // TODO: Log error to DB
}
e.Cancelled would never happen.
The only thing I would be concerned about is logging errors and "message sent", because the example console program says "message sent" even if I have, say, the wrong port and the message doesn't go through.
The only time it would report an error is for an ArgumentNullException or InvalidOperationException.
So logging "message sent" could be erroneous.
But there is no way to check for sure whether a message went out, since it returns void and not a success bool. I guess this is better than putting Send() in a try/catch, which would be more expensive.
One alternative is to set up the callback with an empty SendCompletedCallback() and just have this:
private static void SendCompletedCallback(object sender, AsyncCompletedEventArgs e)
{
    // Do nothing
}
Then we get the benefit of non-blocking async email sending and have the infrastructure in place for a callback if we ever need it.
But we are not forced to add any functionality at the moment.
Am I thinking this through correctly?
I think I am going with this approach.
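For reference, a minimal sketch of that approach (the host, port and token are placeholders; the callback only logs when something actually went wrong):

using System;
using System.ComponentModel;
using System.Net.Mail;

class EmailService
{
    public void Send(MailMessage message)
    {
        var client = new SmtpClient("smtp.example.com", 587); // placeholder host/port
        client.SendCompleted += SendCompletedCallback;
        // The token comes back as e.UserState so the callback can identify the message.
        client.SendAsync(message, message.Subject);
    }

    private static void SendCompletedCallback(object sender, AsyncCompletedEventArgs e)
    {
        if (e.Error != null)
        {
            Console.WriteLine("[{0}] {1}", e.UserState, e.Error);
            // TODO: Log error to DB
        }
        // Only treat the message as sent when neither Error nor Cancelled is set.
    }
}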

I found the SendMailAsync() method works best.
You don't need a callback or a user token.
Easy to implement and non-blocking.
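For example, a minimal sketch (host, port and message are placeholders) where a failed send surfaces as an exception you can catch and log:

using System;
using System.Net.Mail;
using System.Threading.Tasks;

class EmailService
{
    public async Task SendAsync(MailMessage message)
    {
        using (var client = new SmtpClient("smtp.example.com", 587)) // placeholder host/port
        {
            try
            {
                await client.SendMailAsync(message);
                // Reaching this line means the SMTP server accepted the message.
            }
            catch (SmtpException ex)
            {
                Console.WriteLine(ex);
                // TODO: Log error to DB
            }
        }
    }
}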

Related

Allow-listing IP addresses using `call.cancel()` from within `EventListener.dnsEnd()` in OkHttp

I am overriding the dnsEnd() function in EventListener:
@Override
public void dnsEnd(Call call, String domainName, List<InetAddress> inetAddressList) {
    inetAddressList.forEach(address -> {
        logger.debug("checking if url ({}) is in allowlist", address.toString());
        if (!allowlist.contains(address)) {
            call.cancel();
        }
    });
}
I know the documentation says not to alter call parameters, etc.:
"All event methods must execute fast, without external locking, cannot throw exceptions, attempt to mutate the event parameters, or be re-entrant back into the client. Any IO - writing to files or network should be done asynchronously."
But, as I don't care about the call if it is trying to reach an address outside the allowlist, I fail to see the issue with this implementation.
Does anyone have experience with this, and why might it be an issue?
I tested this and it seems to work fine.
This is fine and safe. Probably the strangest consequence is that the canceled event will be triggered by the thread already processing the DNS event.
But cancelling is not the best way to constrain permitted IP addresses to a list. You can instead implement the Dns interface. Your implementation should delegate to Dns.SYSTEM and then filter its results against your allowlist. That way you don't have to worry about races on cancellation.
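For example, a rough sketch of such a Dns implementation (the class name and the allowlist set are illustrative; only Dns.SYSTEM, lookup() and OkHttpClient.Builder.dns() come from OkHttp):

import okhttp3.Dns;
import okhttp3.OkHttpClient;

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

class AllowlistDns implements Dns {
    private final Set<InetAddress> allowlist;

    AllowlistDns(Set<InetAddress> allowlist) {
        this.allowlist = allowlist;
    }

    @Override
    public List<InetAddress> lookup(String hostname) throws UnknownHostException {
        // Delegate to the system resolver, then keep only allowlisted addresses.
        List<InetAddress> filtered = Dns.SYSTEM.lookup(hostname).stream()
                .filter(allowlist::contains)
                .collect(Collectors.toList());
        if (filtered.isEmpty()) {
            // Failing the lookup makes the call fail cleanly instead of racing a cancel.
            throw new UnknownHostException("No allowlisted address for " + hostname);
        }
        return filtered;
    }
}

// Usage:
// OkHttpClient client = new OkHttpClient.Builder().dns(new AllowlistDns(allowlist)).build();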

Using Observables to process queue messages which require a callback at end of processing?

This is a bit of a conceptual question, so let me know if it's off topic.
I'm looking at writing yet another library to process messages off a queue - in this case an Azure storage queue. It's pretty easy to create an observable and throw a message into it every time a message is available.
However, there's a snag here that I'm not sure how to handle. The issue is this: when you're done processing the message, you need to call an API on the storage queue to actually delete the message. Otherwise the visibility timeout will expire and the message will reappear to be dequeued again.
As an example, here's how this loop looks in C#:
public event EventHandler<string> OnMessage;

public void Run()
{
    while (true)
    {
        // Read message
        var message = queue.GetMessage();
        if (message != null)
        {
            // Run any handlers
            OnMessage?.Invoke(this, message.AsString);
            // Delete off queue when done
            queue.DeleteMessage(message);
        }
        else
        {
            Thread.Sleep(2500);
        }
    }
}
The important thing here is that we read the message, trigger any registered event handlers to do things, then delete the message after the handlers are done. I've omitted error handling here, but in general if the handler fails we should NOT delete the message, but instead let it return to visibility automatically and get redelivered later.
How do you handle this kind of thing using Rx? Ideally I'd like to expose the observable for anyone to subscribe to. But I need to do stuff at the end of processing for that message, whatever the "end" happens to mean here.
I can think of a couple of possible solutions, but I don't really like any of them. One would be to have the library call a function supplied by the consumer that takes in the source observable, hooks up whatever it wants, then returns a new observable that the library subscribes to for the final cleanup. But that's pretty limiting, as consumers basically get only one shot at hooking up to the messages.
I guess I could put the call to delete the message after the call to OnNext, but then I don't know whether the processing succeeded or failed, unless there's some sort of back channel in that API I don't know about.
Any ideas/suggestions/previous experience here?
Try having a play with this:
IObservable<int> source =
    Observable
        .Range(0, 3)
        .Select(x =>
            Observable
                .Using(
                    () => Disposable.Create(() => Console.WriteLine($"Removing {x}")),
                    d => Observable.Return(x)))
        .Merge();

source
    .Subscribe(x => Console.WriteLine($"Processing {x}"));
It produces:
Processing 0
Removing 0
Processing 1
Removing 1
Processing 2
Removing 2
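As a rough sketch of how that pattern might map onto the queue case (dequeuedMessages, queue and ProcessMessage are stand-ins for your polling source, the Azure storage queue API and a consumer handler):

IObservable<CloudQueueMessage> source =
    dequeuedMessages
        .Select(m =>
            Observable
                .Using(
                    () => Disposable.Create(() => queue.DeleteMessage(m)),
                    d => Observable.Return(m)))
        .Merge();

// Each subscriber's OnNext runs first; the delete fires when the inner sequence completes.
// Note this alone does not skip the delete when a handler throws, so the failure case
// still needs separate handling.
source.Subscribe(m => ProcessMessage(m));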

Azure ServiceBus TopicClient SendAsync implementation in own wrapper

What is the proper implementation of the SendAsync method of the Azure ServiceBus TopicClient?
In the second implementation, will the BrokeredMessage actually be disposed before the SendAsync completes?
public async Task SendAsync<TMessage>(TMessage message, IDictionary<string, object> properties = null)
{
    using (var bm = MessagingHelper.CreateBrokeredMessage(message, properties))
    {
        await this._topicClient.Value.SendAsync(bm);
    }
}

public Task SendAsync<TMessage>(TMessage message, IDictionary<string, object> properties = null)
{
    using (var bm = MessagingHelper.CreateBrokeredMessage(message, properties))
    {
        return this._topicClient.Value.SendAsync(bm);
    }
}
I would like to get the most out of the async/await pattern.
Answer to your question: the second approach can cause issues with disposed objects; you have to wait for SendAsync to finish before you can release its resources.
Detailed explanation.
If you await the call, execution of the method is suspended at that point and does not resume until the awaited task completes. The brokered message is kept alive by the async state machine and is not disposed until after the send has finished.
If you don't await, execution continues past the using block, and all resources of the brokered message are freed (using calls Dispose on the object at the end of the block) before, or while, SendAsync actually consumes them, since SendAsync may still be running at that point. This can easily lead to exceptions inside SendAsync.
Await does not block the current thread; it suspends the method until the task completes and then resumes with its result, which is what you actually need here. The purpose of async/await is to let a task run concurrently with other work while still giving you a way to wait for its result at the point where further execution isn't possible without it.
The first approach is good if every method up the call chain is async too; that is, the caller of your SendAsync is an async Task method, and so is its caller, and so on up to the top-level caller.
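As a hypothetical illustration of what "async all the way" means for the callers (method and field names here are invented):

// Hypothetical call chain: each caller is async and awaits the next one down.
public async Task PublishOrderAsync(Order order)
{
    await _messageSender.SendAsync(order); // your SendAsync<TMessage> from above
}

public async Task HandleCheckoutAsync(Order order)
{
    await PublishOrderAsync(order); // and so on up to the top-level entry point
}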
Also, consider the exceptions that could be raised; they are listed here. As you can see, there are so-called transient errors, a kind of error that a retry can often fix. Your code has no handling for such exceptions. An example of the retry pattern can be found here, but the article on exceptions mentioned above may suggest better solutions, and that is a topic for another question. I would also add some logging so you are at least aware of any non-transient exceptions.
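As a rough illustration (assuming the older Microsoft.ServiceBus.Messaging API, where MessagingException exposes IsTransient; the retry/backoff policy here is only a sketch):

public async Task SendWithRetryAsync<TMessage>(
    TMessage message,
    IDictionary<string, object> properties = null,
    int maxAttempts = 3)
{
    for (var attempt = 1; ; attempt++)
    {
        try
        {
            using (var bm = MessagingHelper.CreateBrokeredMessage(message, properties))
            {
                // Await inside the using block so the message is not disposed mid-send.
                await this._topicClient.Value.SendAsync(bm);
            }
            return;
        }
        catch (MessagingException ex)
        {
            if (!ex.IsTransient || attempt >= maxAttempts)
            {
                // Non-transient error or out of attempts: let the caller handle/log it.
                throw;
            }
            // Transient fault: back off briefly and try again.
            await Task.Delay(TimeSpan.FromSeconds(attempt));
        }
    }
}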

Boost calling method from outside of class

Let's see how simple a question I can ask. I have:
void TCPClient::test(const boost::system::error_code& ErrorCode)
{
    // Anything can be here
}
and I would like to call it from another class. I have a global boost::thread_group that creates a thread
clientThreadGroup->create_thread(boost::bind(&TCPClient::test, client, /* this is where I need help */));
but I am uncertain how to call test, or whether this is even the correct way.
As an explanation of the overall project: I am creating a TCP connection between a client and a server, and I have a method "send" (in another class) that will be called when data needs to be sent. My current goal is to be able to call test (which currently has async_send in it) and send the information through the already set-up socket. However, I am open to other ideas on how to implement this and will probably work on a producer/consumer model if this proves too difficult.
I can use either for this project, but I will later have to implement listen to be able to receive control packets from the server, so any advice on which method to use would be greatly appreciated.
boost::system::error_code err;
clientThreadGroup->create_thread(boost::bind(&TCPClient::test, client, err));
This works for me. I don't know if it will actually carry an error if something goes wrong, so if someone wants to correct me there, I would appreciate it (if just for experience's sake).

Writing to channel in a loop

I have to send a lot of data in small blocks to a client connected to my server.
So, I have something like:
for (;;) {
    messageEvent.getChannel().write("Hello World");
}
The problem is that, for some reason, the client is receiving dirty data, as if the Netty buffer is not cleared at each iteration, so we get something like "Hello WorldHello".
If I make a small change to my code and add a thread sleep, everything works fine:
for (;;) {
    messageEvent.getChannel().write("Hello World");
    Thread.sleep(1000);
}
As MRAB said, if the server is sending multiple messages on a channel without indicating the end of each message, the client cannot always read the messages correctly. Adding a sleep after writing a message will not solve the root cause of the problem either.
To fix this, you have to mark the end of each message in a way the other party can identify. If the client and server are both using Netty, you can add a LengthFieldPrepender and a LengthFieldBasedFrameDecoder before your JSON handlers, as sketched below.
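A rough sketch of what that framing setup might look like with a Netty 3.x-style pipeline factory (the getChannel().write(...) calls in the question suggest Netty 3; the class name, handler names and the 1 MB frame limit are illustrative):

import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder;
import org.jboss.netty.handler.codec.frame.LengthFieldPrepender;
import org.jboss.netty.handler.codec.string.StringDecoder;
import org.jboss.netty.handler.codec.string.StringEncoder;

public class FramedPipelineFactory implements ChannelPipelineFactory {
    @Override
    public ChannelPipeline getPipeline() {
        ChannelPipeline pipeline = Channels.pipeline();
        // Outbound: the string is encoded to bytes, then a 4-byte length field is prepended.
        pipeline.addLast("frameEncoder", new LengthFieldPrepender(4));
        pipeline.addLast("stringEncoder", new StringEncoder());
        // Inbound: the byte stream is split on the length field, then decoded back to a String.
        pipeline.addLast("frameDecoder", new LengthFieldBasedFrameDecoder(1048576, 0, 4, 0, 4));
        pipeline.addLast("stringDecoder", new StringDecoder());
        // Add your JSON/business handlers after the framing handlers.
        return pipeline;
    }
}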
String encodedMsg = new Gson().toJson(
        sendToClient, new TypeToken<ArrayList<CoordinateVO>>() {}.getType());
By default, Gson uses HTML escaping for content; sometimes this leads to weird encoding. You can disable it if required by using a GsonBuilder:
final static GsonBuilder gsonBuilder = new GsonBuilder().disableHtmlEscaping();
....
String encodedMsg = gsonBuilder.create().toJson(object);
In neither case are you sending anything to indicate where one item ends and the next begins, or how long each item is.
In the second case the sleep lets the channel time out and flush, so the client sees a "break", which it interprets as the end of the item.
The client should never see this "dirty data". If that's really the case then it's a bug, but to be honest I can't think of anything that could lead to this in Netty. Every Channel.write(..) event is added to a queue, which then gets written to the client when possible. So whatever data is passed to the write(..) method just gets written; there is no "concat" of the data.
Do you maybe have some custom encoder in the pipeline that buffers the data before sending it to the client?
It would also help if you could show the complete code that gives this behavior, so we can see which handlers are in the pipeline, etc.
