Creating Cron Jobs in C# - Windows

I am writing a scheduling-type application in C# and allowing the user to store tasks that they want to run at certain times. Right now I give them the option of specifying how often to run a task (Daily/Weekly/Monthly) as well as a time, which is then stored in a database.
I am having a little trouble wrapping my head around the pseudocode behind this and am looking for suggestions on how to implement it. I am running a repeating timer every 60 seconds to check each task to see whether it needs to run, but I always seem to hit roadblocks when working with Date/Time, and adding recurring days (daily/weekly/etc.) has complicated it even more.
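One common way to sidestep the Date/Time arithmetic is to store a "next run time" with each task and advance it after every run. Below is a minimal sketch of that idea; the names are hypothetical, and it is shown in C++ only for illustration (the question's application is C#, where the same arithmetic maps directly onto DateTime.AddDays/AddMonths).
#include <ctime>
#include <string>
#include <vector>

struct Task {
    std::string name;
    std::string recurrence;   // "Daily", "Weekly" or "Monthly" (hypothetical schema)
    std::time_t nextRun;      // next due time, stored in the database with the task
};

// Called from the 60-second timer: run anything that is due, then advance it.
void Tick(std::vector<Task>& tasks)
{
    std::time_t now = std::time(nullptr);
    for (Task& t : tasks) {
        if (t.nextRun > now)
            continue;                              // not due yet

        // RunTask(t);                             // launch the user's task here

        // Advance to the next occurrence; loop in case the machine was off
        // and several occurrences were missed.
        while (t.nextRun <= now) {
            std::tm next = *std::localtime(&t.nextRun);
            if (t.recurrence == "Daily")        next.tm_mday += 1;
            else if (t.recurrence == "Weekly")  next.tm_mday += 7;
            else /* Monthly */                  next.tm_mon  += 1;
            t.nextRun = std::mktime(&next);        // mktime normalizes the overflow
        }
        // ...persist t.nextRun back to the database here.
    }
}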

Related

Cypress - how to describe a multiple step test

This is probably an anti-pattern, but I'm new to e2e testing and I'm not sure there's a way around my requirements.
I need to test a scenario in my system that is several steps long (maybe 50-60 steps). This means going to about 15-20 pages, clicking and entering various details, and checking the results. The reason the scenario is so long is that the system takes an order all the way from creating the order and its line items with various details through a long production process. My most valuable test would be to run through the whole process and verify the results.
I can't find a good way of isolating, say, steps 40-42 to verify that this process alone works well, because to get an order in that state I'd have to run through the first 39 steps.
Is there a good way to write tests to cover this scenario?

executePackage seems to take a long time to launch subpackage

I am a relative beginner at SSIS so I may be doing something silly.
I have a process that involves looping over a heterogeneous queue and processing the objects one at a time. The process is currently done in 'set logic' and it's dropping stuff. I was asked to rework it in a looping manner, so that decision has been made for me.
I have chosen to implement queue logic in 1 package and the actual processing in another package.
This is all going relatively well considering...
I now have the process up and running, but it's slow: 9 seconds per item. Clearly I can't present this solution. :-)
One thing I notice: 1.5-2 seconds of each loop are spent on the ExecutePackage Task in the queue loop.
I can't figure out how to get a hard number; I am using the flashing-green-box method of performance tuning. The other steps seem to be very fast. Adding indexes, changing SQL to stored procedures, all the usual tricks have helped.
Is the UI reliable at all with regard to boxes turning white/yellow/green? Some tasks report times in the Progress tab, some don't seem to. So I am counting yellow time.
Should calling a subpackage be that expensive? One change I made was changing 'RunInASeparateProcess' to FALSE. I did that because the subpackage otherwise produces the following message:
Error: 0xC0012024 at Script Task: The task "Script Task" cannot run on this edition of Integration Services. It requires a higher level edition.
Task failed: Script Task
The reading I have done seems to advocate multiple packages. Does anyone have any counter-patterns? Should I stay the course? I started changing to one package, but copy/paste doesn't seem to work well with Sequence Containers, and I would also need to recreate all the variables in the parent package. Doable, but I'm not sure that is the answer.
Does anyone know of any tuning resources/websites/books they would be willing to share?
Update - I have been tearing things down in an effort to figure out what the problem is. I was thinking it was the package configurations passing variable values, but I don't think that is it. I can pass variables to another package with nothing in it and it is fast.
I can make the trivial subpackage slow by adding the two connection managers to it.
I suddenly realize I may be making and breaking a connection to both an Oracle server and a SQL Server in the main package and then again in the subpackage.
Am I correct in this observation?
Is there any way I can reuse the connection between the two packages?
When I google it, most of what I see is suggestions for passing the connection string.
UPDATE - I combined the two packages into one. The performance is now about 1.25 seconds per item, down from about 9. The only thing I can point to that changed is that I am now reusing a single connection instead of making multiple connections.
Thanks, I appreciate any help you are kind enough to offer.
Greg
Once you enable logging, I'd suggest running the package from a command window using dtexec. While that doesn't perfectly duplicate the server environment, it does have the advantages of (a) eliminating BIDS as a potential performance issue and (b) being something you can do without jumping through change control hoops.
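As a rough example (the file path here is only a placeholder), an invocation like the following runs the package outside BIDS and prints verbose progress to the console:
dtexec /FILE "C:\Packages\ParentPackage.dtsx" /REPORTING V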

Is there a way to have OptionalStep timeout quickly in QTP?

In my automated test I have an area that occasionally shows up (and needs to be clicked on when it does show up). This is the perfect place to use an OptionalStep prefix, to prevent the step from failing if the optional area never shows up.
Thing is, I would like the OptionalStep to only wait a second or two before moving on to the rest of the test. Just as I can have object.Exist(2) only wait for 2 seconds, is there a way to have OptionalStep wait for only a couple of seconds?
Some other caveats:
I'd like to keep this as one small line. I know I could create a multi-line logic test that uses object.Exist(2) inside an If/Then statement, but I'd rather have the code be small and trim.
I don't want to change the global 20-second timeout just for this one step.
Since this optional step only shows up in one specific area, it seems like Recovery Scenarios would not be a good choice to have running throughout the entire test.
Vitaly's comment would be a good solution, as you are possibly overcomplicating your test unnecessarily.
Also, having such a long global timeout is not recommended; it should be as low as possible. I usually have it set at around 3 seconds and deal with the synchronisation in the code.
Anything that takes a long period of time should be known about upfront and dealt with in the code. Having a global timeout for everything will cause your test to run unnecessarily slowly whenever an "object cannot be found" error occurs.

Using runt to do recurring non-weekly events in Ruby (bi-weekly, every 3 weeks, etc.)

I need to be able to create recurring events that happen on specific days but don't necessarily happen every week. They could be scheduled bi-weekly, every 3 weeks, etc. There is a current implementation that needs an update and I'd like to use the temporal expressions stuff from runt to redo it.
Runt will work for what I need, except it doesn't seem to handle the intervals for non-weekly events. It adds some complexity because the event also needs to capture a start date so you can accurately compute in which weeks to fire the events and in which to ignore them. I think I can rework runt to do this, but I'd rather not reinvent the wheel if somebody has already tackled it, or if there is a better solution out there. Any suggestions?
You aren't clear: are you running a script continuously to do this? If so, why not use something like "at"?
If this is a scheduling application have you looked at:
http://icalendar.rubyforge.org/
I've decided to build what I needed into runt. I've got the initial support in already (in the form of a REWeekWithIntervalTE class that takes a start date, an interval, and a weekday or array of weekdays). If anybody is interested in playing with it, you can check out my fork. Sorry for not being clearer in my initial question about this being a scheduling issue.

Problems with running an application under controlled environment (Win32)

I'm not exactly sure how to tag this question or how to write the title, so if anyone has a better idea, please edit it.
Here's the deal:
Some time ago I had written a little but crucial part of a computing olympiad management system. The system's job is to take submissions from participants (code files), compile them, run them against predefined test cases, and return results. Plus all the rest of the stuff you can imagine it should do.
The part I had written was called Limiter. It was a little program whose job was to take another program and run it in a controlled environment. Controlled in this case means limitations on available memory, computing time, and access to system resources. Also, if the program crashes, I should be able to determine the type of the exception and report it to the user. Finally, when the process terminates, it should be noted how long it executed (with a resolution of at least 0.01 seconds, preferably better).
Of course, the ideal solution to this would be virtualization, but I'm not that experienced to write that.
My solution to this was split into three parts.
The simplest part was the access to system resources. The program would simply be executed with a restricted access token. I combined some of the basic groups (Everyone, Anonymous, etc.) that are available to all processes in order to provide practically read-only access to the system, with the exception of the folder it was executing in.
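The original Limiter code isn't shown, but the restricted-token launch roughly looks like the sketch below (error handling omitted, names hypothetical). DISABLE_MAX_PRIVILEGE is a shortcut that strips all privileges except SeChangeNotifyPrivilege; building a specific list of deny-only and restricting SIDs, as described above, takes a few more calls.
#include <windows.h>
// Link with Advapi32.lib. Error handling omitted; check every return value in real code.
bool StartRestricted(wchar_t* cmdLine, PROCESS_INFORMATION* pi)
{
    HANDLE hToken = NULL, hRestricted = NULL;
    OpenProcessToken(GetCurrentProcess(),
                     TOKEN_DUPLICATE | TOKEN_ASSIGN_PRIMARY | TOKEN_QUERY,
                     &hToken);

    // DISABLE_MAX_PRIVILEGE drops every privilege except SeChangeNotify;
    // the SID arrays stay empty in this simplified version.
    CreateRestrictedToken(hToken, DISABLE_MAX_PRIVILEGE,
                          0, NULL, 0, NULL, 0, NULL, &hRestricted);

    STARTUPINFOW si = { sizeof(si) };
    // CREATE_SUSPENDED so the job object can be attached before the child
    // runs its first instruction; DEBUG_ONLY_THIS_PROCESS makes us its debugger.
    BOOL ok = CreateProcessAsUserW(hRestricted, NULL, cmdLine,
                                   NULL, NULL, FALSE,
                                   CREATE_SUSPENDED | DEBUG_ONLY_THIS_PROCESS,
                                   NULL, NULL, &si, pi);

    CloseHandle(hRestricted);
    CloseHandle(hToken);
    return ok != 0;
}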
The limitation of memory was done through job objects - they allow you to specify a maximum memory limit.
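For reference, the memory cap via a job object looks roughly like this (a sketch with a hypothetical helper name; the limit value and error handling are up to the caller):
#include <windows.h>
// Sketch: cap a (still suspended) child process at maxBytes of committed memory.
bool LimitProcessMemory(HANDLE hProcess, SIZE_T maxBytes)
{
    HANDLE hJob = CreateJobObjectW(NULL, NULL);
    if (hJob == NULL)
        return false;

    JOBOBJECT_EXTENDED_LIMIT_INFORMATION limits = {};
    limits.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
    limits.ProcessMemoryLimit = maxBytes;          // per-process commit limit

    if (!SetInformationJobObject(hJob, JobObjectExtendedLimitInformation,
                                 &limits, sizeof(limits)))
        return false;

    // Allocations beyond the limit simply fail inside the child.
    return AssignProcessToJobObject(hJob, hProcess) != 0;
}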
And lastly, to limit execution time and catch all exceptions, my Limiter attaches to the process as a debugger. That way I can monitor the time it has spent and terminate it if it takes too long. Note that I cannot use job objects for this, because they only report kernel time and user time for the job. A process might do something like Sleep(99999999), which would count toward neither but would still tie up the testing machine. So although I don't count a process's idle time in its final execution time, there still has to be a limit on it.
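Stripped down, the time-limit half of that could look something like this sketch (it assumes the child was created with DEBUG_ONLY_THIS_PROCESS, as in the launch sketch above; the helper name and the reporting side are hypothetical):
#include <windows.h>
// Sketch: pump debug events and enforce a wall-clock limit on the child.
bool WatchChild(HANDLE hProcess, DWORD wallClockLimitMs)
{
    ULONGLONG start = GetTickCount64();
    DEBUG_EVENT ev = {};

    for (;;) {
        // Wake up periodically even when no debug event arrives, so a child
        // sitting in Sleep(99999999) still hits the wall-clock limit.
        if (!WaitForDebugEvent(&ev, 500)) {
            if (GetTickCount64() - start > wallClockLimitMs) {
                TerminateProcess(hProcess, 1);            // time limit exceeded
                return false;
            }
            continue;
        }

        if (ev.dwDebugEventCode == EXIT_PROCESS_DEBUG_EVENT) {
            ContinueDebugEvent(ev.dwProcessId, ev.dwThreadId, DBG_CONTINUE);
            return true;                                  // normal termination
        }

        if (ev.dwDebugEventCode == EXCEPTION_DEBUG_EVENT &&
            ev.u.Exception.dwFirstChance == 0) {
            // Second-chance exception: the code to report is
            // ev.u.Exception.ExceptionRecord.ExceptionCode.
            TerminateProcess(hProcess, 2);
        }

        ContinueDebugEvent(ev.dwProcessId, ev.dwThreadId,
                           ev.dwDebugEventCode == EXCEPTION_DEBUG_EVENT
                               ? DBG_EXCEPTION_NOT_HANDLED
                               : DBG_CONTINUE);
    }
}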
Now, I'm no expert in low-level stuff like this. I spent a few days reading MSDN and playing around, and came up with the best solution I could. Unfortunately it doesn't seem to be running as well as could be expected. For the most part it seems to work fine, but weird cases keep creeping up. Just now I have a little C++ program which runs in a split second on its own, but my Limiter reports 8 seconds of user-mode time (taken from the job counters). Here's the code. It prints the output in about half a second and then spends more than 7 seconds just waiting:
#include <iostream>
#include <vector>
using namespace std;
int main()
{
    vector< vector<int> > dp(50000, vector<int>(4, -1));
    cout << dp.size();
}
The code of the limiter is pretty lengthy, so I'm not including it here. I also feel that there might be something wrong with my approach - perhaps I shouldn't do the debugger stuff. Perhaps there are some common pitfalls that I don't know of.
I would like some advice on how other people would tackle this problem. Perhaps there is already something that does this, and my Limiter is obsolete?
Added: The problem seems to be in the little program that I posted above. I've opened a new question for it, since it is somewhat unrelated. I'd still like comments on this approach for limiting a program.
Running with a debugger attached can change the characteristics of the application. Performance can be impacted, and code paths can even change (if the target process does things based on the presence of a debugger, e.g. via IsDebuggerPresent).
A different approach that we've used is to configure our own application to run as the JIT debugger. By setting the AeDebug registry key, you can control what debugger is invoked when an application crashes. This way you only jump in when the target process crashes, and it doesn't impact the process during normal run-time.
This site has some details about setting the postmortem debugger: Configuring Automatic Debugging.
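For illustration, registering a tool as the postmortem debugger boils down to two values under the AeDebug key. A sketch follows; the Limiter path is a placeholder, writing to HKLM needs administrative rights, and 64-bit machines also keep a Wow6432Node copy of the key.
#include <windows.h>
#include <cwchar>
// Sketch: set HKLM\...\AeDebug so our tool is launched when a process crashes.
// The %ld placeholders receive the crashing process id and an event handle.
bool RegisterPostmortemDebugger(const wchar_t* debuggerCmd
                                /* e.g. L"C:\\Tools\\Limiter.exe -p %ld -e %ld" */)
{
    HKEY hKey;
    if (RegOpenKeyExW(HKEY_LOCAL_MACHINE,
            L"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\AeDebug",
            0, KEY_SET_VALUE, &hKey) != ERROR_SUCCESS)
        return false;

    bool ok =
        RegSetValueExW(hKey, L"Debugger", 0, REG_SZ,
                       (const BYTE*)debuggerCmd,
                       (DWORD)((wcslen(debuggerCmd) + 1) * sizeof(wchar_t))) == ERROR_SUCCESS &&
        RegSetValueExW(hKey, L"Auto", 0, REG_SZ,
                       (const BYTE*)L"1", 2 * sizeof(wchar_t)) == ERROR_SUCCESS;  // launch without prompting

    RegCloseKey(hKey);
    return ok;
}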
Your approaches for limiting the memory, getting timing etc. all sound perfectly fine.
