Quartz.NET cron triggers for certain time zones not working - MassTransit

I'm using the MassTransit Quartz integration library with .NET Core to schedule jobs that fall under different time zones. What I'm observing is that the scheduled jobs get triggered for certain time zones but not for others.
Here is how I use the MassTransit library to schedule the job:
public async Task ScheduleRecurringMessage<T>(string destinationAddress, string scheduleId, string scheduledGroupId, string timeZoneId, DateTimeOffset startTime, DateTimeOffset? endTime, string cronExpression, T message) where T : class
{
    var destinationUri = string.Format("rabbitmq://{0}/{1}.{2}", eventbusConfig.Value.EventBusUri, eventbusConfig.Value.EventBusEndpointName, destinationAddress);
    var recurringSchedule = new MasstransitRecurringSchedule(timeZoneId, startTime, endTime, scheduleId, scheduledGroupId, cronExpression);

    await sendEndpoint.ScheduleRecurringSend<T>(new Uri(destinationUri), recurringSchedule, message);
}
When I check the [QRTZ_CRON_TRIGGERS] table, I can see that all scheduled jobs are recorded correctly there, against the correct time zone. However, for UTC-negative time zones (e.g. EST), the scheduler doesn't send the trigger event to RabbitMQ.
Can anyone help me identify why this is happening?

Related

Schedule a method dynamically using a cron expression with the @Scheduled annotation

I would like to schedule a method using the @Scheduled annotation with a cron expression. For example, I want the method to be executed every day at the time specified by the client.
So I would like to get the cron value from the DB, in order to give the client the possibility of executing the method whenever he wants.
Here is my method; it sends emails automatically at 10:00 am to the given addresses, so my goal is to make the 10:00 dynamic.
Thanks for your help.
@Scheduled(cron = "0 00 10 * * ?")
public void periodicNotification() {
    JavaMailSenderImpl jms = (JavaMailSenderImpl) sender;
    MimeMessage message = jms.createMimeMessage();
    MimeMessageHelper helper;
    try {
        helper = new MimeMessageHelper(message, MimeMessageHelper.MULTIPART_MODE_MIXED_RELATED, StandardCharsets.UTF_8.name());
        List<EmailNotification> emailNotifs = enr.findAll();
        for (EmailNotification i : emailNotifs) {
            helper.setFrom("smsender4@gmail.com");
            List<String> recipients = fileRepo.findWantedEmails(i.getDaysNum());
            //List<String> emails = recipientsRepository.getScheduledEmails();
            String[] to = recipients.stream().toArray(String[]::new);
            helper.setTo(to);
            helper.setText(i.getMessage());
            helper.setSubject(i.getSubject());
            sender.send(message);
            System.out.println("Email successfully sent to: " + Arrays.toString(to));
        }
    } catch (MessagingException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
So I'm thinking of the following solution (plus using the answer accepted here).
Let's say you have a class that implements the Runnable interface -> this will be your job that gets executed. Let's call it MyJob.
Also assume that we have a map that holds the ID of the job and its execution reference (you'll see in a second what I'm talking about). Call it something like currentExecutingJobs.
Assume you have an endpoint that gets the name of the job and a cron expression from the client.
When that endpoint gets called:
You'll look in the map above to see if there is any entry with that job ID. If it exists, you cancel the job.
After that, you'll create an instance of that job (you can do that by using reflection and having a custom annotation on your job classes in which you can provide an ID, for example @MyJob("myCustomJobId")).
And from the link provided, you'll schedule the job using
// Schedule a task with the given cron expression
ScheduledFuture<?> myJobScheduledFuture = executor.schedule(myJob, new CronTrigger(cronExpression));
And put the result in the above map: currentExecutingJobs.put("myCustomJobId", myJobScheduledFuture)
ScheduledFuture docs
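Putting the steps above together, here is a minimal sketch of what such a rescheduling service could look like. It assumes a TaskScheduler bean (e.g. a ThreadPoolTaskScheduler) is configured, and the names DynamicJobScheduler, currentExecutingJobs and reschedule are illustrative, not part of the original answer:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ScheduledFuture;
import org.springframework.scheduling.TaskScheduler;
import org.springframework.scheduling.support.CronTrigger;
import org.springframework.stereotype.Service;

// Illustrative sketch: keeps one ScheduledFuture per job id and replaces it
// whenever the client submits a new cron expression.
@Service
public class DynamicJobScheduler {

    private final TaskScheduler taskScheduler; // e.g. a ThreadPoolTaskScheduler bean
    private final Map<String, ScheduledFuture<?>> currentExecutingJobs = new ConcurrentHashMap<>();

    public DynamicJobScheduler(TaskScheduler taskScheduler) {
        this.taskScheduler = taskScheduler;
    }

    public void reschedule(String jobId, Runnable job, String cronExpression) {
        // cancel the previous schedule for this job id, if any
        ScheduledFuture<?> existing = currentExecutingJobs.get(jobId);
        if (existing != null) {
            existing.cancel(false); // let a running execution finish
        }
        // schedule the job with the cron expression supplied by the client
        ScheduledFuture<?> future = taskScheduler.schedule(job, new CronTrigger(cronExpression));
        currentExecutingJobs.put(jobId, future);
    }
}

An endpoint that reads the cron value from the DB (or receives it from the client) can then call reschedule(jobId, myJob, cronExpression) and the new schedule takes effect immediately.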
In case you want to read the property from a database, you can implement EnvironmentPostProcessor, read the necessary values from the DB, and add them to the Environment object; more details are available at https://docs.spring.io/spring-boot/docs/current/reference/html/howto.html#howto-spring-boot-application
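For illustration, a minimal sketch of such an EnvironmentPostProcessor is shown below. It has to use plain JDBC because no Spring beans exist yet at this point, and the table, column and property names (job_schedule, cron_expression, notification.cron) are assumptions, not from the original answer:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.env.EnvironmentPostProcessor;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.MapPropertySource;

// Illustrative sketch: loads a cron expression from the database into the Environment
// before the application context starts. Register the class in META-INF/spring.factories
// under the org.springframework.boot.env.EnvironmentPostProcessor key.
public class CronEnvironmentPostProcessor implements EnvironmentPostProcessor {

    @Override
    public void postProcessEnvironment(ConfigurableEnvironment environment, SpringApplication application) {
        String url = environment.getProperty("spring.datasource.url");
        String user = environment.getProperty("spring.datasource.username");
        String password = environment.getProperty("spring.datasource.password");

        Map<String, Object> props = new HashMap<>();
        try (Connection connection = DriverManager.getConnection(url, user, password);
             Statement statement = connection.createStatement();
             ResultSet rs = statement.executeQuery("SELECT cron_expression FROM job_schedule")) {
            if (rs.next()) {
                props.put("notification.cron", rs.getString(1)); // assumed property name
            }
        } catch (Exception e) {
            throw new IllegalStateException("Could not load cron expression from the database", e);
        }
        environment.getPropertySources().addLast(new MapPropertySource("dbProperties", props));
    }
}

The scheduled method can then reference the value as @Scheduled(cron = "${notification.cron}"), with the caveat that the property is resolved once at startup, not on every trigger.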

ConsoleLogger writing logs out of order in AWS Lambda with .NET Core 3.1

We have an AWS Lambda on .NET Core 3.1. We use dependency injection to add some services; one of those services is a ConsoleLogger. We register the logger like this:
private void ConfigureServices(IServiceCollection services)
{
    this.Configuration = new ConfigurationBuilder().AddEnvironmentVariables().Build();

    services.AddOptions();
    services.AddLogging(builder =>
    {
        builder.AddConsole((x) =>
        {
            x.DisableColors = true;
            x.Format = Microsoft.Extensions.Logging.Console.ConsoleLoggerFormat.Systemd;
        });
    });

    // more services
}
Then in the function we use the logger like this:
[LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]
public async Task Handle(ILambdaContext lambdaContext)
{
    var logger = this.ServiceProvider.GetService<ILogger<MyClass>>();

    string startTime = DateTime.UtcNow.ToString("yyyy-MM-dd HH:mm:ss.fff", CultureInfo.InvariantCulture);
    logger.LogInformation($"Start Time stamp:{startTime}|AwsRequestId:{lambdaContext.AwsRequestId}");

    // more work
    logger.LogInformation("processing x");
    // more work

    string endTime = DateTime.UtcNow.ToString("yyyy-MM-dd HH:mm:ss.fff", CultureInfo.InvariantCulture);
    logger.LogInformation($"End Time stamp:{endTime}|AwsRequestId:{lambdaContext.AwsRequestId}");
}
The problem is that in CloudWatch the logs appear out of order.
Even the Lambda REPORT line (with the billed duration) appears before my end entry.
Is there a way to avoid this?
Thanks
ConsoleLogger buffers messages in an internal queue, so they're probably getting delayed there, and it's nothing to do with CloudWatch. Amazon's own CloudWatch logging library does the same thing, and they note in their own documentation that it can be a problem for Lambdas: https://github.com/aws/aws-logging-dotnet/#aws-lambda
Their recommended solution is to use Amazon.Lambda.Logging.AspNetCore which doesn't do any buffering.
No, I don't believe you can do this with CloudWatch. CloudWatch guarantees delivery, not timely delivery. You could set up a DynamoDB or Elasticsearch database and write your log messages to the database with a timestamp. On retrieval you can sort by the timestamp. This also gives you more control over filtering the messages than is possible with CloudWatch.

Prometheus + Micrometer: how to record time intervals and success/failure rates

I am sending from a front-end client to a metrics-microservice a JSON with the following data:
{
totalTimeOnTheNetwork: number;
timeElasticsearch: number;
isSuccessful: boolean;
}
The metrics-microservice currently handles the data like this:
@AllArgsConstructor
@Service
public class ClientMetricsService {

    @Autowired
    MeterRegistry registry; // abstract class, SimpleMeterRegistry gets injected

    public void metrics(final MetricsProperty metrics) {
        final long networkTime = metrics.getTotalTime() - metrics.getElasticTime();

        registry.timer(ELASTIC_TIME_LABEL).record(metrics.getElasticTime(), TimeUnit.MILLISECONDS);
        registry.timer(TOTAL_TIME_LABEL).record(metrics.getTotalTime(), TimeUnit.MILLISECONDS);
        registry.timer(NETWORK_TIME_LABEL).record(networkTime, TimeUnit.MILLISECONDS);
    }
}
As you can see, I create a new metric for each of the time intervals. I was wondering whether I could put all the intervals into one metric instead. It would also be great if I did not have to calculate the network time in the metrics-microservice but could do it in Grafana.
Also, could I put a success/failure tag inside registry.timer? I assume I would then need to use Timer.builder on every request, like this:
Timer timer = Timer
    .builder("my.timer")
    .description("a description of what this timer does") // optional
    .tags("region", "test") // optional
    .register(registry);
Is that a typical way to do it (i.e. create a new timer on every HTTP request and link it to the registry), or should the timer be derived from the MeterRegistry like in my current version?
Or would you use another metric for logging success/failure? In the future the metric might change from a boolean to an HTTP error code, for example, so I am not sure how to implement it in a maintainable way.
Timer timer = Timer
    .builder("your-timer-name-here")
    .tags("ResponseStatus", String.valueOf(isSuccessful), "ResponseCode", String.valueOf(httpErrorCode))
    .register(registry);

timer.record(metrics.getTotalTime(), TimeUnit.MILLISECONDS);
This should be working code that answers your question, but I have a feeling there is a misunderstanding: why do you want everything in one metric?
Either way, you can probably sort that out with tags. I do not know the capabilities on the Grafana end, but it might be as simple as putting the getElasticTime info into another tag and sending it through.
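For what it's worth, here is a minimal sketch of how the service from the question could record the same timings with an outcome tag, so that success/failure (or later an HTTP status code) can be filtered in Grafana. The metric and tag names are assumptions, and it assumes MetricsProperty exposes an isSuccessful() getter:

import java.util.concurrent.TimeUnit;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import org.springframework.stereotype.Service;

// Illustrative sketch: one timer per measurement, with the outcome carried as a tag.
@Service
public class ClientMetricsService {

    private final MeterRegistry registry;

    public ClientMetricsService(MeterRegistry registry) {
        this.registry = registry;
    }

    public void metrics(final MetricsProperty metrics) {
        String outcome = metrics.isSuccessful() ? "success" : "failure";

        // Timer.builder(...).register(registry) returns the existing timer when the
        // name and tags match, so calling this per request does not create duplicates.
        Timer.builder("client.total.time")
                .tag("outcome", outcome)
                .register(registry)
                .record(metrics.getTotalTime(), TimeUnit.MILLISECONDS);

        Timer.builder("client.elastic.time")
                .tag("outcome", outcome)
                .register(registry)
                .record(metrics.getElasticTime(), TimeUnit.MILLISECONDS);
    }
}

With the two timers exported, the network share can then be derived on the Grafana/PromQL side as the difference between the total-time and elastic-time sums, instead of being pre-computed in the microservice.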

DDD, difference between a Saga and an Event Dispatcher?

On multiple sites (e.g. here or here), Sagas are described as a mechanism that listens to domain events and reacts to them, executing new commands, finally modifying the domain, etc.
Is there any difference between a Saga and a simple event dispatcher, where you have some subscribers react to events?
A "saga" maintains process state. A more accurate term is a process manager. The term "saga" was popularised by NServiceBus which is why many people nowadays refer to it as a "NServiceBus saga". A true saga is a database concept.
Anyway, since an event dispatcher has no interest in process state it is not a process manager. A service bus, as you noted, can also act as an event dispatcher, typically to other systems, although a service bus handles a whole lot more.
There are ways to deal with process state without making use of a saga, e.g.: routing slips and "choreography". Process managers are more of an "orchestration" mechanism.
A process manager can make your life a whole lot simpler, so it does a bit more than an event dispatcher.
Essentially your subscriber(s) will interact with your process manager to effect any changes related to the process.
You may be thinking that this is a bit like workflow and you will be correct. However, a workflow engine is quite a heavy affair whereas a process manager should be a first class citizen in your DDD world :)
Process Manager Example
The following is just a quick, off-the-top-of-my-head, broad sample. Initially the data to create a member is stored as state in the process manager. Only once the e-mail address has been verified is the actual member created and stored with its valid e-mail address.
Then a welcome e-mail is sent, perhaps using a service bus. Once the response arrives from the EMailService endpoint confirming that the mail has been successfully sent, that handler instructs the process manager that the e-mail has been sent and then completes the process manager.
So there would be a MemberRegistrationProcessRepository. Completing a process may result in it being archived or even deleted if it is really no longer required.
I have a suspicion that event sourcing will lend itself nicely to process managers but to keep the sample simple I have put together the following based on what I have previously implemented myself.
What I have also done previously is to keep track of the status changes and we had an SLA of 15 minutes per status. This was monitored and all process managers sitting on a status for more than 15 minutes would be reported to the core operational team to investigate.
In C# one could have something like this:
public class MemberRegistrationProcess
{
    public Guid ProcessId { get; private set; }
    public string Name { get; private set; }
    public EMailAddress EMailAddress { get; private set; }
    public string Status { get; private set; }

    public static MemberRegistrationProcess Create(string name, EMailAddress eMailAddress)
    {
        return new MemberRegistrationProcess(Guid.NewGuid(), name, eMailAddress, "Started");
    }

    public MemberRegistrationProcess(Guid processId, string name, EMailAddress eMailAddress, string status)
    {
        ProcessId = processId;
        Name = name;
        EMailAddress = eMailAddress;
        Status = status;
    }

    public void EMailAddressVerified(IMemberRepository memberRepository)
    {
        if (!Status.Equals("Started"))
        {
            throw new InvalidOperationException("Can only verify e-mail address if in 'Started' state.");
        }

        memberRepository.Add(new Member(Name, EMailAddress));

        Status = "EMailAddressVerified";
    }

    public void WelcomeEMailSent()
    {
        if (!Status.Equals("EMailAddressVerified"))
        {
            throw new InvalidOperationException("Can only set welcome e-mail sent if in 'EMailAddressVerified' state.");
        }

        Status = "WelcomeEMailSent";
    }

    public void Complete(Member member)
    {
        if (!Status.Equals("WelcomeEMailSent"))
        {
            throw new InvalidOperationException("Can only complete in 'WelcomeEMailSent' state.");
        }

        member.Activate();

        Status = "Complete";
    }
}
A Saga is a long-running process that is triggered by events outside the domain. Those events could happen seconds, minutes, or days apart.
The difference from a simple event bus is that a Saga keeps a state machine that can be persisted, so it can handle the long-running process as a "disconnected" workflow driven by the external events.
The easiest way to understand it is with a real-life example; the classic "We sent you a confirmation e-mail to finish your registration in our awesome forum" should work:
Example with NServiceBus:
// data to be persisted to start and resume the Saga when needed
public class UserRegistrationSagaData : ISagaEntity
{
    public Guid Id { get; set; }
    public string Originator { get; set; }
    public string OriginalMessageId { get; set; }

    public string Email { get; set; }
    public int Ticket { get; set; }
}

// the saga itself
public class UserRegistrationSaga :
    Saga<UserRegistrationSagaData>,
    // tell NServiceBus the Saga is created when a RequestRegistration message arrives
    ISagaStartedBy<RequestRegistration>,
    // tell NServiceBus the Saga is resumed when a ConfirmRegistration message arrives
    // (the user clicks the link inside the e-mail)
    IMessageHandler<ConfirmRegistration>
{
    public override void ConfigureHowToFindSaga() // primary keys of this saga in persistence
    {
        ConfigureMapping<RequestRegistration>(saga => saga.Email, message => message.Email);
        ConfigureMapping<ConfirmRegistration>(saga => saga.Ticket, message => message.Ticket);
    }

    // when RequestRegistration arrives this code is executed
    public void Handle(RequestRegistration message)
    {
        // generate a new ticket if it has not been generated
        if (Data.Ticket == 0)
        {
            Data.Ticket = NewUserService.CreateTicket();
        }

        Data.Email = message.Email;

        MailSender.Send(message.Email,
            "Your registration request",
            "Please go to /registration/confirm and enter the following ticket: " + Data.Ticket);

        Console.WriteLine("New registration request for email {0} - ticket is {1}", Data.Email, Data.Ticket);
    }

    // when ConfirmRegistration arrives this code is executed
    public void Handle(ConfirmRegistration message)
    {
        Console.WriteLine("Confirming email {0}", Data.Email);

        NewUserService.CreateNewUserAccount(Data.Email);

        MailSender.Send(Data.Email,
            "Your registration request",
            "Your email has been confirmed, and your user account has been created");

        // tell NServiceBus that this saga can be cleaned up afterwards
        MarkAsComplete();
    }
}
A simple
Bus.Send(new RequestRegistration(...))
from, e.g., a web controller should do the work.
Hard-coding this behavior with a simple event bus would require you to simulate a state machine in your domain in an ugly way, e.g. adding a boolean "confirmed" field to your users table in domain persistence and having to query and work with "confirmed = true" users in the user-management module of your system, or keeping a table of "users pending confirmation" in your domain persistence. I think you get the idea.
So, a Saga is like a simple event bus that helps you avoid polluting your domain and domain persistence with a state machine for the "disconnected" long-running process. This is just responsibility segregation in good OO design.
That is a good question, because it is easy to confuse these concepts. And I agree with the answers that stated that the saga is a business flow.
Because sagas can span multiple bounded contexts, and therefore multiple microservices or modules, they can be implemented in two ways:
Event orchestration
Event choreography
Event orchestration uses a kind of process manager or flow orchestrator: a central component that orchestrates the whole business flow. It will create the saga, coordinate the entire flow across multiple microservices or modules, and then end the saga.
Event choreography is much simpler: the saga participants emit and subscribe to events. That can be done with an event bus, dispatchers, and subscribers.
So the saga itself can be implemented with event dispatchers and subscribers. The difference is that with a saga, the emitted/subscribed events should make sense in the business flow of the saga itself.
I hope I made things simpler :D

Spring: scheduling jobs for a given time

Spring MVC:
How do I schedule jobs at a specific time of day? The schedule time differs every day. The times when these jobs need to run are available in a database table. I was able to read the data from the table, but I'm not sure how to schedule the jobs in Spring MVC. Can someone help?
The Spring scheduler requires that you know the time of day at compile time, so this is going to get a little weird. But if you want to be creative, you can schedule a job at midnight to query the database for the exact time the task should run, sleep until that time, and then execute the task. Something like this:
public abstract class DailyTaskRunner {

    // Execute the specific task here
    protected abstract void executeTask();

    // Query the database here
    // Return the number of milliseconds from midnight til the task should start
    protected abstract long getMillisTilTaskStart();

    // Run at midnight every day (Spring cron has six fields: second, minute, hour, day, month, weekday)
    @Scheduled(cron = "0 0 0 * * *")
    public void scheduledTask() {
        long sleepMillis = getMillisTilTaskStart();
        try {
            Thread.sleep(sleepMillis);
        } catch (InterruptedException ex) {
            // Handle error
        }
        executeTask();
    }
}
You can extend this class once for every job.
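For illustration, extending it for one job could look like the sketch below; ReportEmailTask, ScheduleRepository and findTodaysRunTime() are hypothetical names, not part of the answer above:

import java.time.Duration;
import java.time.LocalTime;
import org.springframework.stereotype.Component;

// Hypothetical subclass: runs one specific daily task at a time read from the database.
@Component
public class ReportEmailTask extends DailyTaskRunner {

    private final ScheduleRepository scheduleRepository; // assumed repository for the schedule table

    public ReportEmailTask(ScheduleRepository scheduleRepository) {
        this.scheduleRepository = scheduleRepository;
    }

    @Override
    protected void executeTask() {
        // do the actual work, e.g. send the daily report e-mail
    }

    @Override
    protected long getMillisTilTaskStart() {
        // read today's run time (e.g. 14:30) from the database table
        LocalTime runAt = scheduleRepository.findTodaysRunTime();
        return Duration.between(LocalTime.MIDNIGHT, runAt).toMillis();
    }
}

One caveat with this approach is that Thread.sleep keeps a scheduler thread blocked until the task runs, so with several such jobs you would want a correspondingly larger scheduler thread pool.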
