Algorithm for creating an auto-scheduling app

I am a beginner in React and decided to start with this project.
Essentially I am trying to make an auto-scheduler, which takes tasks - essentially objects with properties like due date, importance, subject, topic, etc.
So I am trying to develop an algorithm that takes in an array of these task objects and sorts them to fit the schedule. My problem is how to go about creating a complex algorithm like this and sorting the list.
Let's also assume that I add global rules such as "I don't do more than 8 hours of homework a day" or "I don't do two homework assignments for the same class in a row".
How can I go about developing such an algorithm?
Here was my idea (I haven't implemented this yet): I develop an equation where I multiply each of a task object's properties by a constant to give the task a sorting number, and then sort the array by that number.
Finally I run a loop over this array and make sure that the global rules are met.
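For example, a rough sketch of that idea in TypeScript (the property names, weights, and rule checks below are just placeholders I'm imagining, not a finished design):

interface Task {
  subject: string;
  topic: string;
  dueDate: Date;         // when the task is due
  importance: number;    // e.g. 1 (low) to 5 (high)
  estimatedHours: number;
}

// Weighted score: sooner due dates and higher importance float to the top.
function score(task: Task, now: Date): number {
  const hoursUntilDue = (task.dueDate.getTime() - now.getTime()) / 3_600_000;
  const URGENCY_WEIGHT = 10;   // made-up constants to be tuned later
  const IMPORTANCE_WEIGHT = 5;
  return URGENCY_WEIGHT / Math.max(hoursUntilDue, 1) + IMPORTANCE_WEIGHT * task.importance;
}

// Sort by score, then walk the sorted list enforcing the global rules.
function scheduleForToday(tasks: Task[], maxHoursPerDay = 8): Task[] {
  const now = new Date();
  const sorted = [...tasks].sort((a, b) => score(b, now) - score(a, now));

  const plan: Task[] = [];
  let hoursPlanned = 0;
  for (const task of sorted) {
    // Rule: no more than maxHoursPerDay of homework a day
    // (a fuller version would push skipped tasks to the next day).
    if (hoursPlanned + task.estimatedHours > maxHoursPerDay) continue;
    // Rule: don't do two tasks for the same class in a row.
    if (plan.length > 0 && plan[plan.length - 1].subject === task.subject) continue;
    plan.push(task);
    hoursPlanned += task.estimatedHours;
  }
  return plan;
}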
If you have a better idea or solution please let me know. Thanks!

You probably want to throw your process up on the cloud, so you have the benefits of fault tolerance, parallel processing, etc., etc. Here is one option.
https://cloud.google.com/run/docs/triggering/using-scheduler
Console
Visit the Cloud Scheduler console page.
Click Create job.
Supply a name for the job.
Specify the frequency, or job interval, at which the job is to run, using a configuration string. For example, the string 0 */3 * * * runs the job every 3 hours. The string you supply here can be any crontab compatible string.
For more information, see Configuring Job Schedules.
From the dropdown list, choose the timezone to be used for the job frequency.
Specify HTTP as the target:
Specify the fully qualified URL of your service, for example https://myservice-abcdef-uc.a.run.app The job will send requests to this URL.
Specify the HTTP method: the method must match what your previously deployed Cloud Run service is expecting. The default is POST.
Optionally, specify the data to be sent to the target. This data is sent in the body of the request when either the POST or PUT HTTP method is selected.
Click More to show the auth settings.
From the dropdown menu, select Add OIDC token.
In the Service account field, copy the service account email of the service account you created previously.
In the Audience field, copy the full URL of your service.
Click Create to create and save the job.
Gcloud
Create the job:
gcloud beta scheduler jobs create http test-job --schedule "5 * * * *" \
--http-method=HTTP-METHOD \
--uri=SERVICE-URL \
--oidc-service-account-email=SERVICE-ACCOUNT-EMAIL \
--oidc-token-audience=SERVICE-URL
Replace
HTTP-METHOD with the HTTP method, e.g. GET, POST, PUT, etc.
SERVICE-URL with your service URL.
SERVICE-ACCOUNT-EMAIL with your service account email.
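On the receiving side, the Cloud Run service only has to handle the HTTP request that Cloud Scheduler sends. A minimal sketch in TypeScript with Express (the route and handler body are assumptions for illustration, not part of the official docs):

import express from "express";

const app = express();
app.use(express.json());

// Cloud Scheduler will POST to this URL on the schedule you configured.
app.post("/", (req, res) => {
  // ...run your periodic work here (placeholder)...
  console.log("Scheduled job triggered", req.body);
  res.status(200).send("OK");
});

// Cloud Run provides the port to listen on via the PORT environment variable.
const port = Number(process.env.PORT) || 8080;
app.listen(port, () => console.log(`Listening on ${port}`));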
Also, this is kind of the old-fashioned way of doing it, but I grew up in the '80s and early '90s, so I'm still a fan of the Windows Task Scheduler.
https://www.windowscentral.com/how-create-automated-task-using-task-scheduler-windows-10

Related

Laravel Schedule One-Off Events

Users are able to set up a marketing email send time within my app for random dates as they need them. It is crucial that the emails start to go out exactly when they are scheduled. So, the app needs to create something that fires one time for a specific group of emails at a specific date and time down to the minute. There will be many more dates for other sends in the future, but they are all distinct (so need something other than 'run every xxx date')
I could have a Scheduler task that runs every minute, looks at the dates of any pending tasks in the database, and hands off to the command that sends those that need sending. But I'm running a multi-tenanted app -- there's likely no overlap, but it seems like a huge hit to query multiple databases every minute, for every tenant, searching through potentially thousands of records per tenant.
What I need (I think) is a way to programmatically set a schedule for each send date, and then remove it. I liked this answer, or perhaps using ->between($startMin, $endMin) without the every xxx instruction, or even using the cron function within Scheduler to set up one single time for each group that needs to be sent. I'm not sure I'm using this the way it was intended, though.
Would this even work? I programmatically created a schedule from within a test method, and it was added to the schedule queue according to the dump of the $schedule I created (showing all schedules) - but it did not show up via this method, found in this answer:
$schedule = app()->make(\Illuminate\Console\Scheduling\Schedule::class);
$events = collect($schedule->events())->filter(function (\Illuminate\Console\Scheduling\Event $event) {
    // stripos() returns 0 when the needle sits at position 0, so compare against false explicitly
    return stripos($event->command, 'YourCommandHere') !== false;
});
It also did not output anything, so I'm wondering if programmatically creating a schedule outside of Kernel.php is not the way to go.
Is there another way? I don't know the scheduler well enough to know whether these one-off schedules permanently remain somewhere, whether they are deleted after their intended single use, whether they keep taking up memory, and so on.
Really appreciate any help or guidance.

Is there any way to replay events in a date range?

I am implementing an example with Spring Boot and Axon. I have two events
(deposit and withdraw account balance). I want to know whether there is any way to get the state of the Account Aggregate as of a given date.
I want to get not just the final state, but to replay events in a range of dates.
I think I can help with this.
In the context of Axon Framework, you can start a replay of events by telling a given TrackingEventProcessor to 'reset' its tokens. By the way, the current description of this in the Reference Guide can be found here.
These TrackingTokens are the objects which know how far a given TrackingEventProcessor is in terms of handling events from the Event Stream. Thus resetting/adjusting these TrackingTokens is what will issue a Replay of events.
Knowing all these, the second step is to look at the methods the TrackingEventProcessor provides to 'reset tokens', which is threefold:
TrackingEventProcessor#resetTokens()
TrackingEventProcessor#resetTokens(Function<StreamableMessageSource, TrackingToken>)
TrackingEventProcessor#resetTokens(TrackingToken)
Option one will reset your tokens to the beginning of the event stream, which will thus replay everything.
Option two and three however give you the opportunity to provide a TrackingToken.
Thus, you could provide a TrackingToken starting from several points on the Event Stream. So, how do you go about creating such a TrackingToken at a specific point in time? To that end, you should take a look at the StreamableMessageSource interface, which has the following operations:
StreamableMessageSource#createTailToken()
StreamableMessageSource#createHeadToken()
StreamableMessageSource#createTokenAt(Instant)
StreamableMessageSource#createTokenSince(Duration)
Option 1 is what's used to create a token at the tail (start) of the stream, whilst option 2 will create a token at the head of the stream.
Options 3 and 4, however, allow you to create a token at a specific point in time, thus allowing you to replay all the events from the given instant up to now.
There is one caveat in this scenario, however. You're asking to replay an Aggregate. From Axon's perspective, the Aggregate is by default the Command Model in a CQRS set-up, thus dealing with commands going into your system. In the majority of applications, you want commands (i.e. the requests to change something) to operate on the current state of the application. As such, the Repository provided to retrieve an Aggregate does not allow specifying a point in time.
The above described solution in regards to replaying is thus solely tied to Query Model creation, as the TrackingEventProcessor is part of the Event Handling side in your application most often used to create views. This idea also ties in with your questions, that you want to know the "state of the Account Aggregate" at a given point in time. That's not a command, but a query, as you have 'a request for data' instead of 'the request to change state'.
Hope this helps you out, @Safe!

Compensating Events on CQRS/ES Architecture

So, I'm working on a CQRS/ES project in which we are having some doubts about how to handle trivial problems that would be easy to handle in other architectures.
My scenario is the following:
I have a customer CRUD REST API, and each customer has a unique document (number), so when I'm registering a new customer I have to verify that there is no other customer with that document, to avoid duplicates. But when it comes to a CQRS/ES architecture, where we have eventual consistency, I found out that this kind of validation can be very hard to address.
It is important to notice that my problem is not across microservices, but between the command application and the query application of the same microservice.
Also, we are using EventStore.
My current solution:
So what I do today is: in my command application, before saving the CustomerCreated event, I ask the query application (using PostgreSQL) if there is a customer with that document, and if not, I allow the event to go on. But that doesn't guarantee 100%, right? Because my query side can be desynchronized, I cannot trust it 100%. That's when my second validation kicks in: when my query application is processing the events and saving them to my PostgreSQL, I check again if there is a customer with that document, and if there is, I reject that event and emit a compensating event to undo/cancel/inactivate the customer with the duplicated document, therefore finishing that customer stream on EventStore.
Although this works, there are two things that bother me here. The first is that my command application relies on the query application, so if my query application is down, my command side is affected (today I just return false from my validation if the query side is down, but still...). The second is: should a query/read model really be able to emit events? And if so, what is the correct way of doing it? Should the command side have some kind of API for that? Or should the query side emit the event directly to EventStore using some shared library? And if I have more than one view/read model, which one should handle this?
I really hope someone can shine a light on these questions and help me with these matters.
For reference, you may want to be reviewing what Greg Young has written about Set Validation.
I ask the query application (using PostgreSQL) if there is a customer with that document, and if not, I allow the event to go on. But that doesn't guarantee 100%, right?
That's exactly right - your read model is a stale copy, and may not have all of the information collected by the write model.
That's when my second validation kicks in: when my query application is processing the events and saving them to my PostgreSQL, I check again if there is a customer with that document, and if there is, I reject that event and emit a compensating event to undo/cancel/inactivate the customer with the duplicated document, therefore finishing that customer stream on EventStore.
This spelling doesn't quite match the usual designs. The more common implementation is that, if we detect a problem when reading data, we send a command message to the write model, telling it to straighten things out.
This is commonly referred to as a process manager, but you can think of it as the automation of a human supervisor of the system. Conceptually, a process manager is an event sourced collection of messages to be sent to the command model.
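As a rough sketch (generic TypeScript with hypothetical event and command names, not tied to any particular framework), such a process manager might look like this:

// Hypothetical message shapes for illustration only.
interface CustomerRegistered { type: "CustomerRegistered"; customerId: string; document: string; }
interface UnregisterCustomer { type: "UnregisterCustomer"; customerId: string; reason: string; }

// The process manager reacts to events and decides which commands to send to
// the write model; it never mutates the read model or the event store itself.
class DuplicateDocumentProcessManager {
  constructor(
    private readonly isDocumentTakenByAnother: (document: string, customerId: string) => Promise<boolean>,
    private readonly sendCommand: (command: UnregisterCustomer) => Promise<void>,
  ) {}

  async on(event: CustomerRegistered): Promise<void> {
    // If an earlier customer already owns this document, tell the write model
    // to straighten things out with a compensating command, not an event.
    if (await this.isDocumentTakenByAnother(event.document, event.customerId)) {
      await this.sendCommand({
        type: "UnregisterCustomer",
        customerId: event.customerId,
        reason: "duplicate document",
      });
    }
  }
}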
You might also want to consider whether you are modeling your domain correctly. If documents are supposed to be unique, then maybe the command model should be using the document number as a key in the book of record, rather than using the customer. Or perhaps the document id should be a function of the customer data, rather than being an arbitrary input.
as far as I know, eventstore doesn't have transactions across different streams
Right - one of the things you really need to be thinking about in general is where your stream boundaries lie. If set validation has significant business value, then you really need to be thinking about getting the entire set into a single stream (or by finding a way to constrain uniqueness without using a set).
How should I send a command message to the write model? via API? via a message broker like Kafka?
That's plumbing; it doesn't really matter how you do it, so long as you are sure that the command runs within its own transaction/unit of work.
So what I do today is: in my command application, before saving the CustomerCreated event, I ask the query application (using PostgreSQL) if there is a customer with that document, and if not, I allow the event to go on. But that doesn't guarantee 100%, right? Because my query side can be desynchronized, I cannot trust it 100%.
No, you cannot safely rely on the query side, which is eventually consistent, to prevent the system from stepping into an invalid state.
You have two options:
You permit the system to enter a temporary, pending state and then, eventually, bring it into a valid permanent state; for this you could allow the command to pass, yield the CustomerRegistered event, and use a Saga/process manager to verify against a collection uniquely indexed by document and issue a compensating command (not an event!), i.e. UnregisterCustomer.
Instead of sending a command, you create and start a Saga/process that preallocates the document in a collection uniquely indexed by document and, if successful, then sends the RegisterCustomer command. You can model the Saga as an entity.
So, in both solutions you use a Saga/process manager. In order for the system to be resilient, you should make sure that the RegisterCustomer command is idempotent (so you can resend it if the Saga fails or is restarted).
You've butted up against a fairly common problem. I think the other answer by VoiceOfUnreason is worth reading. I just wanted to make you aware of a few more options.
A simple approach I have used in the past is to create a lookup table. Your command tries to register the key in a unique-constraint table. If it can reserve the key, the command can go ahead.
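A rough sketch of that reservation step in TypeScript (the table name and the db client shape are assumptions; any store with a unique constraint works):

// Try to claim the document before letting the command proceed.
// The UNIQUE constraint on customer_documents(document) enforces the set-wide rule.
async function tryReserveDocument(
  db: { query: (sql: string, params: unknown[]) => Promise<unknown> },
  document: string,
  customerId: string,
): Promise<boolean> {
  try {
    await db.query(
      "INSERT INTO customer_documents (document, customer_id) VALUES ($1, $2)",
      [document, customerId],
    );
    return true; // reservation succeeded, the command can go ahead
  } catch {
    // A real implementation would check specifically for a unique-violation error code.
    return false; // another customer already holds this document
  }
}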
Depending on the nature of the data and the domain, you could let this 'problem' occur and raise additional events to mark it. If it is something that's important to the business or to the way the application works, then you can deal with it either manually or at the time via compensating commands. If the latter, then it would make sense to use a process manager.
In some (rare) cases where speed/capacity is less of an issue then you could consider old-fashioned locking and transactions. Admittedly these are much better suited to CRUD style implementations but they can be used in CQRS/ES.
I have more detail on this in my blog post: How to Handle Set Based Consistency Validation in CQRS
I hope you find it helpful.

Display order or sorting of typed parameters prompted for review at run of custom build

Problem:
Our build configuration to deploy requires a small number of typed parameters to allow for exclusion/inclusion of some service deploys. The parameters are set to prompt for review, and the build is triggered manually from TeamCity's Run Custom Build button.
I haven't yet found any documentation for (or hackish example of) ordering or sorting rules that TeamCity uses to display those typed parameters.
As a quick sketch of an example, we're hoping to display this:
1. Stop service X
2. Start service X
3. Stop service Y
4. Start service Y
Or:
1. Stop service X
2. Stop service Y
3. Start service X
4. Start service Y
Note: the actual order of the build steps is fine and is not part of the objective here. We don't need to re-order those; I'm hoping to avoid user error by keeping either the services grouped together or the choices grouped together.
It almost seems that the Run Custom Build dialog is ordered by the internal ID (or creation time) of each parameter.
We're not using TeamCity's internal database but rather a MySQL installation on the same host; we're open to the option of re-ordering the parameters directly in the database if necessary.
Is there another way to influence the sorting or display order of these parameters when prompting the user for their review?
I would suggest one of these approaches:
Remove the parameters altogether and create a separate build for each specific action: builds are sorted alphabetically, so you can order them however you want. Besides that, you could trigger each build automatically without worrying about selecting parameters, and you will see who performed a specific action for a specific service and when (with a single build you have to look at the parameters or logs to get this information).
If you need parameters and would like to select them, then the most obvious choice is a Select box from Typed Parameters. You can change the order in the build configuration, and that should automatically produce the correct order in the UI.
You can try my plugin for dynamic select parameters -- this way you can control the order of the parameters from a remote service.

Why is infinite loop protection being triggered on my CRM workflow?

Update
I should have added from the outset - this is in Microsoft Dynamics CRM 2011
I know CRM well, but I'm at a loss to explain behaviour on my current deployment.
Please read the outline of my scenario to help me understand which of my presumptions / understandings is wrong (and therefore what is causing this error). It's not consistent with my expectations.
Basic Scenario
Requirement demands that a web service is called every X minutes (it adds pending items to a database index)
I've opted to use a workflow / custom entity trigger model (i.e. I have a custom entity which has a CREATE plugin registered. The plugin executes my logic. An accompanying workflow is started when "completed" time + [timeout period] expires. On expiry, it creates a new trigger record and the workflow ends).
The plugin logic works just fine. The workflow concept works fine to a point, but after a period of time the workflow stalls with a failure:
This workflow job was canceled because the workflow that started it included an infinite loop. Correct the workflow logic and try again. For information about workflow logic, see Help.
So in a nutshell - standard infinite loop detection. I understand the concept and why it exists.
Specific deployment
Firstly, I think it's quite safe for us to ignore the content of the plugin code in this scenario. It works fine, it's atomic and hardly touches CRM (to be clear, it is a pre-event plugin which runs the remote web service, awaits a response and then sets the "completed on" date/time attribute on my Trigger record before passing the Target entity back into the pipeline). So long as a Trigger record is created, this code runs and does what it should.
Having discounted the content of the plugin, there might be an issue that I don't appreciate in having the plugin registered on the pre-create step of the entity...
So that leaves the workflow itself. It's a simple one. It runs thusly:
On creation of a new Trigger entity...
it has a Timeout of Trigger.new_completedon + 15 minutes
on timeout, it creates a new Trigger record (with no "completed on" value - this is set by the plugin remember)
That's all - no explicit "end workflow" (though I've just added one now and will set it testing...)
With this set-up, I manually create a new Trigger record and the process spins nicely into action. Roll forwards 1h 58 mins (based on the last cycle I ran - remembering that my plugin code may take a minute to finish running), after 7 successful execution cycles (i.e. new workflow jobs being created and completed), the 8th one fails with the aforementioned error.
What I already know (correct me where I'm wrong)
Recursion depth, by default, is set to 8. If a workflow / plugin calls itself 8 times then an infinite loop is detected.
Recursion depth is reset every one hour (or 10 minutes - see "Warnings" in linked blog?)
Recursion depth settings can be set via PowerShell or SDK code using the Deployment Web Service in an on-premise deployment only (via the Set-CrmSetting Cmdlet)
What I don't want to hear (please)
"Change recursion depth settings"
I cannot change the Deployment recursion depth settings as this is not an option in an online scenario - ultimately I will be deploying to CRM Online too.
"Increase the timeout period on your workflow"
This is not an option either - the reindex needs to occur every 15 minutes, ideally sooner.
Update
@Boone suggested below that the recursion depth timeout is reset after 60 minutes of inactivity rather than every 60 minutes. Therein lies the first misunderstanding.
While discussing with @alex, I suggested that there may be some persistence of CorrelationId between creating an entity via the workflow and the workflow that ultimately gets spawned... Well, there is. The CorrelationId is the same in both the plugin and the workflow and any records that spool from that thread. I am now looking at ways to decouple the CorrelationId (or perhaps the creation of records) from the entity and the workflow.
For the one hour "reset" to take place you have to have NO activity for an hour. It doesn't reset just 1 hour from the original. So since you have activity every 15 minutes, it never has a chance to reset. I don't know that this is set in stone anywhere... but that's been my experience.
In CRM 4 it was possible to create a CRM Service (Google creating a CRM service in the child pipeline) and reset the correlation ID (using CorrelationToken.NewToken()). I don't see anything so easy in the 2011 SDK. No idea if this trick worked in the online environment. Is 2011 online backwards compatible with CRM 4 plug-ins?
One thing you could try would be to use the IExecutionContext.CorrelationId to scavenge the asyncoperation (System Job) table. But according to the metadata, the attributes I think might be useful (CorrelationId, CorrelationUpdatedTime, Depth) are NOT valid for update. Maybe you could delete the rows? Even that may not help.
I doubt this can be solved like this.
I'd suggest a different approach: deploy a simple application alongside CRM and let it call the web service, which in turn can use the XRM endpoints in order to change the records.
UPDATE
Or, you can try something like this upon your CRM service initialization in the plugin (I dug it up from one of my plugins), leaving your workflow untouched:
CrmService service = new CrmService();
// initialize the service here, then...
CorrelationToken corToken = new CorrelationToken();
corToken.CorrelationId = context.CorrelationId;
corToken.CorrelationUpdatedTime = context.CorrelationUpdatedTime;
// WILD GUESS: enforce unlimited depth?
corToken.Depth = 0; // this was: context.Depth;
// update the correlation token on the service
service.CorrelationTokenValue = corToken;
I admit I don't really remember much about this (code dates back to about 2 years ago), but it might help.
