NLog async message order - thread-safety

I have an NLog config file with several async targets (<targets async="true">). One of the targets is a log file.
I know that NLog is thread-safe, but does it guarantee that messages coming from the same thread are written to the log file in the order in which they were produced?

I asked the same question on the NLog forum, and got a reply from Kim Christensen confirming that this is indeed the case:
Yes logs written from the same thread should be written in the order
produced.
Thanks Kim!

You can follow this question, which has some explanation about thread safety.
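For reference, here is a minimal C# sketch (not from the original question) that builds the programmatic equivalent of <targets async="true">; the file name and log messages are illustrative. The async wrapper queues events and a single background thread writes them to the wrapped file target, which is why events logged from one thread keep their order.

```csharp
using NLog;
using NLog.Config;
using NLog.Targets;
using NLog.Targets.Wrappers;

class Program
{
    static void Main()
    {
        // Programmatic equivalent of <targets async="true">: the async
        // wrapper queues events, and one background thread flushes them
        // to the wrapped file target in queue order.
        var fileTarget = new FileTarget("file") { FileName = "app.log" };
        var asyncTarget = new AsyncTargetWrapper("asyncFile", fileTarget);

        var config = new LoggingConfiguration();
        config.AddRuleForAllLevels(asyncTarget);
        LogManager.Configuration = config;

        var logger = LogManager.GetCurrentClassLogger();

        // All three calls happen on the same thread, so they are enqueued
        // (and therefore written) in exactly this order.
        logger.Info("first");
        logger.Info("second");
        logger.Info("third");

        LogManager.Shutdown(); // flushes the async queue before exit
    }
}
```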

Related

JobRunr Job Processing at least once or exactly once?

We want to use JobRunr with Spring Boot, and I am looking at the documentation, which is somewhat confusing.
On the main page it says the following:
Reliable
Once a background job was created without any exception,
JobRunr takes the responsibility to process it at least once.
And on the FAQ page https://www.jobrunr.io/en/documentation/faq/ it says:
How does JobRunr make sure to only process a job once?
I guess what is written in the FAQ means that it uses optimistic locking to coordinate so that the job is processed once. But this does not mean it will be processed exactly once: it might get processed but not updated in the DB, which means double processing can occur.
Am I getting this right?
Also, from the FAQ I can't see what happens when the status is updated to PROCESSING but the actual processing fails; this is not explained there.
Thanks a lot for the feedback.
Best Regards
It seems this has already been answered in the Discussions tab on GitHub.
If no exceptions during the run of your job, JobRunr
will process your job exactly once by means of optimistic locking.
If however your job is existing out of multiple phases and
one of those last phases fails, all the prior phases will
be re-executed when your job is retried.
https://github.com/jobrunr/jobrunr/discussions/358
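The practical consequence of that answer is that a multi-phase job should be written so that re-running already completed phases is harmless. A hypothetical, language-agnostic sketch of that idea (shown in C# for consistency with the rest of this digest; this is not the JobRunr API, and the completion store is a stand-in):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch, not the JobRunr API: because a retried job re-runs
// all of its phases, each phase checks a completion marker before doing
// work, so "at least once" execution does not repeat side effects.
class MultiPhaseJob
{
    // Stands in for a durable store of completed-phase markers (e.g. a DB table).
    private readonly HashSet<string> completedPhases = new HashSet<string>();

    public void Run(string jobId)
    {
        RunPhase(jobId, "charge-payment", () => { /* side effect 1 */ });
        RunPhase(jobId, "send-email", () => { /* side effect 2 */ });
    }

    private void RunPhase(string jobId, string phase, Action work)
    {
        var key = jobId + ":" + phase;
        if (completedPhases.Contains(key))
            return;               // already done in an earlier attempt, skip on retry
        work();                   // perform the side effect once
        completedPhases.Add(key); // record completion so a retry skips this phase
    }
}
```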

Get logs while developing an Ansible Collection

I'm wondering, when developing an Ansible Collection, whether it is possible to get arbitrary logs written to a log file/console.
This could be a random print() statement to help with debugging, or is the only way just to concatenate everything into your final return message?
Thank you
Question:
Is it possible to get arbitrary logs written to a log file/console?
Answer:
Your question looks to me similar to Is it possible to print out debugging logs while task is running in Ansible?.
According to the answer there, citing from the documentation:
Ansible executes each module, usually on the remote managed node, and collects return values. ...
Ansible modules normally return a data structure that can be registered into a variable ...
so such live output is not implemented. There is an option for Debugging modules during development ("To see what is actually happening in the module"), but that might not fit all of your cases.
Question:
This being a random print() statement to help debugging
Answer:
According to the Developer Guide » Debugging modules » Simple debugging:
Since print() statements do not work inside modules, raising an exception is a good approach if you just want to see some specific data. Put raise Exception(some_value) somewhere in the module and run it normally. Ansible will handle this exception, pass the message back to the control node, and display it.

Spring Integration `RotatingServerAdvice` Polling

When RotatingServerAdvice is added as an advice to a Poller, as in
PollerSpec pollerSpec = Pollers.cron(cronExpression)
.advice(rotatingServerAdvice(sftpConfig, proxyConfig))
.maxMessagesPerPoll(3)
.errorChannel("errorChannel");
will the poller rotate through each RotationPolicy.KeyDirectory at the scheduled time, or will it check one directory per poll? I've checked the examples in the Spring Integration GitHub repo and the reference documentation, but I'm not able to get clarity on this. I'm guessing it should be the first, but I'd like to confirm.
Please clarify why you see a difference between the scheduled time and a poll. The poll really happens only when the scheduler gets to the task to perform it.
There is a `fair` option for you to consider. See the docs: https://docs.spring.io/spring-integration/docs/current/reference/html/ftp.html#ftp-rotating-server-advice

ServiceBase.OnShutdown and event logs in Windows .Net 3.5

I've written a custom service that overrides ServiceBase.OnShutdown().
Unfortunately, when I log to the event log, nothing is written.
My guess is that the Windows event log was shut down before my service.
Is there a way to order service shutdown so that my service shuts down
before the event logger? I don't want to have to write out to a file.
Please advise. Thanks.
You could try to set up a dependency where your service depends on the Event Log service. This is mostly done to make them load in the correct order, but I assume it might also ensure that your service is always stopped first.
As can be seen in this TechNet article, you'd need to change the DependOnService value either using the Sc.exe tool or the ChangeServiceConfig API.
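If the service is installed with a .NET installer class, the same dependency can be declared from code. A sketch, assuming a placeholder service name of MyShutdownService; this writes the same DependOnService value that Sc.exe or ChangeServiceConfig would set:

```csharp
using System.ComponentModel;
using System.Configuration.Install;
using System.ServiceProcess;

// Declaring a dependency on the Event Log service ("EventLog") so that
// Windows orders the two services relative to each other.
[RunInstaller(true)]
public class MyServiceInstaller : Installer
{
    public MyServiceInstaller()
    {
        var processInstaller = new ServiceProcessInstaller
        {
            Account = ServiceAccount.LocalSystem
        };

        var serviceInstaller = new ServiceInstaller
        {
            ServiceName = "MyShutdownService",
            StartType = ServiceStartMode.Automatic,
            // Same effect as: sc config MyShutdownService depend= EventLog
            ServicesDependedOn = new[] { "EventLog" }
        };

        Installers.Add(processInstaller);
        Installers.Add(serviceInstaller);
    }
}
```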
There is a way, but it is more or less a reflection hack.
I added my solution to another post: here.
Hope I could help.

multiple processes writing to a single log file

This is intended to be a lightweight generic solution, although the problem is currently with an IIS CGI application that needs to log the timeline of events (second resolution) for troubleshooting a situation where a later request ends up in the MySQL database BEFORE the earlier request!
So it boils down to logging debug statements in a single text file.
I could write a service that manages a queue as suggested in this thread:
Issue writing to single file in Web service in .NET
but deploying the service on each machine is a pain
or I could use a global mutex, but this would require each instance to open and close the file for each write
or I could use a database, which would handle this for me, but it doesn't make sense to use a database like MySQL to try to troubleshoot a timeline issue with itself. SQLite is another possibility, but this thread
http://www.perlmonks.org/?node_id=672403
suggests that it is not a good choice either.
I am really looking for a simple approach, something as blunt as writing to individual files for each process and consolidating them occasionally with a scheduled app. I do not want to over-engineer this, nor spend a week implementing it. It is only needed occasionally.
Suggestions?
Try the simplest solution first: each write to the log opens and closes the file. If you experience problems with this, which you probably won't, look for another solution.
You can use file locking. Lock the file for writing, write the message, unlock.
My suggestion, to preserve performance, is to think in terms of asynchronous logging. Why not send your log data over UDP to a service listening on a port, which then writes it to the log file?
I would also suggest some kind of central logger that can be called by each process in an asynchronous way. Whether the communication is UDP or RPC or whatever would be an implementation detail.
Even though it's an old post, has anyone got an idea why not to use the following concept:
Creating/opening a file with share mode of FILE_SHARE_WRITE.
Having a named global mutex, and opening it.
Whenever a file write is desired, lock the mutex first, then write to the file.
Any input?
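A minimal C# sketch combining the suggestions above: a named global mutex serializes writers across processes, and the file is opened, appended to, and closed on every write so no process holds it between log calls. The mutex name, log path, and SharedLog helper are placeholders.

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.Threading;

// Every process opens the same global mutex, takes it, appends one line
// to the shared log file, and releases it.
static class SharedLog
{
    public static void Write(string message)
    {
        using (var mutex = new Mutex(false, @"Global\MyAppLogMutex"))
        {
            try
            {
                mutex.WaitOne();
            }
            catch (AbandonedMutexException)
            {
                // A previous holder died without releasing; the mutex is
                // still acquired by this thread, so it is safe to continue.
            }

            try
            {
                using (var stream = new FileStream(@"C:\logs\app.log",
                           FileMode.Append, FileAccess.Write, FileShare.ReadWrite))
                using (var writer = new StreamWriter(stream))
                {
                    var pid = Process.GetCurrentProcess().Id;
                    writer.WriteLine("{0:O} [{1}] {2}", DateTime.Now, pid, message);
                }
            }
            finally
            {
                mutex.ReleaseMutex();
            }
        }
    }
}

// Usage from any process: SharedLog.Write("request received");
```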