VBScript scheduled task error

I have a VBScript that does a number of tasks, including moving files from one place to another.
Lots of copy/move/create-folder/delete-folder/delete-file operations like this:
Set filesys = CreateObject("Scripting.FileSystemObject")
filesys.CopyFile "D:\Test Now\test.txt", "W:\test_2\test.txt"
I'm able to get the entire script to work when I run it manually by double-clicking it, and no errors come up. However, when I run it from a scheduled task, a "Path Not Found" error is raised. I found this out by writing the error to a DB.
If
On Error Resume Next
is off, the script halts at that error. With it on, the script skips the failing operations and never carries out its function.
I've quadruple-checked the paths to make sure they're right. Is there something I should be aware of when running scripts from scheduled tasks?

Are D:\ and W:\ local drives, or are they mapped network drives? If they're mapped, the user running the process might not have those drives available. Be sure you run the task as a privileged local account. It's also best to log in to that account and run the command manually; once you verify that it works, you can tighten up security if that's a concern.

This jumps to mind: enclose your paths in quotes ("). One of the two has a space in it, and VBScript certainly won't like that:
filesys.CopyFile "D:\Test Now\test.txt", "W:\test_2\test.txt"
Yup, just tested it, and that is indeed what was causing the error.
By the way, you may know this, but unless you want to enter a debugging nightmare, you shouldn't use On Error Resume Next unless you have a very specific reason to do so.

An old topic I know, but this may help someone:
My scheduled task was running under a specific user. Using the Component Services snap-in for MMC, I needed to give the user 'Launch and Activation' permissions. Once this was done, the scheduled task ran correctly.


Jenkins Build Never Finishing

I have a Jenkins master/slave setup which has been working quite happily, running Oracle imports on some Linux boxes.
I have just added a new slave node and tried to run our existing database import job on this new node. This job consists of three subprojects. The first runs some execute shells, copying files and changing permissions; this currently completes successfully. The second runs an execute shell which ends with an Oracle impdp. The impdp completes (the DB exists and ps -ef no longer shows impdp running), but the Jenkins subproject never finishes; the UI just sits there with the clock whirring around.
I've tried adding an echo after the impdp, and this also executes correctly, but the subproject still never finishes.
If I add a Post-Build email notification, it is not sent.
The third subproject is never reached.
What could be the cause of this and how do I debug what is happening?
In our case, the jobs would declare "Finished: SUCCESS", but then continue with some unknown Jenkins business for another 10 or 20 minutes. After turning on more detailed logging, we found it was related to the ill-named LogRotator.
We have thousands of old builds and are deleting the artifacts for those older than a certain number of days. Because of the way old builds are handled, Jenkins searches the entire list of old builds even though they have already had their artifacts removed.
There is an issue related to this that has now been fixed: https://issues.jenkins-ci.org/browse/JENKINS-22607
As of right now I do not see it in a release, but if you have this issue, the temporary workaround is to turn off the deletion.
This turned out to be something horrible :-)
After finishing the work, Jenkins tries to kill all the processes it spawned. To identify them, it goes through every process in the OS, reads /proc/<pid>/environ (this is a Linux box), which contains that process's environment variables, and compares them with the environment it sets for Jenkins-spawned processes.
The problem was that there was one particular Oracle process on our DB server where, if you tried to read /proc/<pid>/environ for it, the read would just hang forever – which is where the Jenkins code got stuck.
I have no idea why it was getting stuck like this and nor did our DBA. We restarted it and now it works.
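The identification mechanism described above can be sketched on any Linux box. BUILD_ID is the actual variable Jenkins uses to tag spawned processes; the value demo-42 and the sleep child below are just stand-ins for illustration:

```shell
#!/bin/sh
# Tag a child process with an environment variable, then find it again
# by reading its /proc/<pid>/environ, the way Jenkins' process tree
# killer does. environ is NUL-separated, so convert NULs to newlines.
BUILD_ID=demo-42 sleep 30 &
pid=$!
tr '\0' '\n' < "/proc/$pid/environ" | grep "^BUILD_ID=demo-42"
kill "$pid"
```

A read that blocks on this file for one broken process is enough to hang the whole scan, which matches the behaviour seen here.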
You can add set -x to the top of shell scripts to see which commands are actually executed. That way you should be able to see easily from the output which command is blocking.
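Note that it is set -x (minus) that turns tracing on; set +x turns it off again. A minimal sketch, with placeholder echo commands standing in for real build steps:

```shell
#!/bin/sh
# set -x makes the shell print each command (prefixed with +, on
# stderr) before executing it, so a stalled build log shows exactly
# which command is blocking.
set -x
echo "checking out sources"
echo "compiling"
set +x
echo "tracing is off again"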

Jenkins Timeout because of long script execution

I have some issues with Jenkins and running a PowerShell script within it. Long story short: the script takes 8x longer to execute than when run manually on the server (slave), which takes just a few minutes.
I'm wondering why.
The script contains functions which invoke commands like & msbuild.exe or & svn commit. I found out that the script hangs in the lines where the aforementioned commands are executed. The result is that Jenkins times out because the script takes that long. I could raise the timeout threshold in the Jenkins job configuration, but I don't think that's the solution to the problem.
There is no error output or any other information on why it takes that long, and I have no further ideas about the reason. Maybe one of you could tell me how Jenkins invokes those commands internally.
This is what Jenkins does (Windows batch plugin):
powershell -File %WORKSPACE%\ScriptHead\DeployOrRelease.ps1
I had created my own PowerShell CI service before I found that Jenkins supports its own plugin for this. But in my implementation, and in my current job configs, we follow a simple segregation principle: more, smaller steps are better. I found that my CI service works better when it is separated into different steps (in case of an error it's also a lot easier to do root-cause analysis). The Single Responsibility Principle is helpful here too. So, as in Jenkins, we have pre-, post-, build, and email steps as separate scripts. As for
msbuild.exe
as far as I remember, in my case there were issues related to operations on file-system paths. When the script was divided into separate functions we had better performance (with additional checks of parameters).
Use the "divide and conquer" technique. You have two choices: modify your script so that it displays what it is doing and how long every step takes, or make smaller scripts that perform actions like:
get the code source,
compile/build the application,
run the test,
create a package,
send the package,
archive the logs
send notification.
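The split-into-steps idea above can be sketched in shell; the stage names and the echo commands standing in for real work are placeholders, not from the original job:

```shell
#!/bin/sh
# Run each pipeline stage as a separate command and report how long it
# took, so a slow stage stands out in the build log.
set -e
run_step() {
  name=$1; shift
  echo "=== $name ==="
  start=$(date +%s)
  "$@"
  echo "=== $name finished in $(( $(date +%s) - start ))s ==="
}
run_step "get sources" echo "checked out"
run_step "build"       echo "compiled"
run_step "run tests"   echo "tests passed"
```

In Jenkins the same effect is achieved by giving each stage its own build step or subproject instead of one monolithic script.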
The most problematic is usually the first step: getting the source code from Git, SVN, Mercurial, or whatever you have as a version control system. Make sure this step is not embedded in your script.
During the job run, Jenkins captures the output and uses AJAX to display the result in your browser. In the script, make sure you flush standard output after every step (or every few steps). Some languages buffer standard output, so you can see the results only at the end.
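As a concrete illustration of the buffering point: when stdout goes to a pipe (as it does under Jenkins) many programs switch to block buffering, and GNU coreutils' stdbuf can force line buffering on them. The awk command here is just a stand-in for a real build step:

```shell
#!/bin/sh
# stdbuf -oL forces the command's stdout to be line-buffered even when
# it is piped, so each log line is flushed as soon as it is printed
# instead of appearing in one burst at the end.
stdbuf -oL awk 'BEGIN { print "step 1 done"; print "step 2 done" }' | cat
```

In languages you control directly, an explicit flush after each progress message achieves the same thing.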
You can also create log files, which are helpful for archiving and verifying the activity status of older runs. From my experience, using Jenkins with more than 10 steps requires you to create a specialized application that can run multiple steps, like Robot Framework.

Other causes for VB6 trappable error Path/File access error (Error 75)?

Apart from those listed by Microsoft here.
10 DBEngine.CompactDatabase Dbpath, DbTempPath, "", dbEncrypt
20 Kill Dbpath
30 Name DbTempPath As DbPath
The above code operates day in, day out in a lot of installations, but then, extremely rarely, line 30 fails and I get a call that the database is missing.
Today for the first time I saw it happen myself and the error that was thrown:
Path/File access error (Error 75)
However I don't think that any of the listed causes apply in this situation.
When this happened at an installation today, I renamed the temp file and ran the code again, and the error occurred again.
(I think it just might have something to do with a hardware issue as making a copy of the file took a very long time.)
There's not really enough information here to say, but my guess would be that the problem is your Kill statement not finishing before the Name statement runs. It's never been clear to me, but it seems that NTFS has the option of implementing some file operations (especially for large files) asynchronously, so the Kill may not be completed by NTFS even though VB6 thinks it is and has moved on to the Name statement.
Probably the best thing would be to put a check after the Kill to make sure the file is actually gone before starting the rename with Name.
I'm not sure why the problem occurs, but you could work around it with a DoEvents call, or by writing a small procedure that waits a second or three (or longer) to give the disk time to complete the deletion, or Access time to release the file.
A more advanced workaround would be to write a function to check if the file is available before calling the rename.

How to run a specified .bat file at a particular time, i.e. a scheduler

I have been using the at command to schedule the task
ex: at 14:45 my.bat
and I am getting the output on the command prompt as
"JOB ID is added"
But this command is not getting fired at the time I scheduled.
Can anyone please help me?
I suspect the issue is not that the BAT file is not executing at all, but rather either or both of i) individual commands within the BAT file are failing, or ii) the output isn't getting sent to the place you're looking for it. (Things get even weirder if anything in the batch file requests input, since a) by default batch files may not be able to interact with the "console" at all and b) the system is probably unattended anyway at the time the batch file executes.) If my suspicion is right, there is no one "fix-everything" but rather a whole bunch of small fixes ...and you have to hit every one of them.
Find out the needed password to actually log in as 'admin' (rather than your usual user), open a DOS box, and try to run the batch file. There should be some sort of error message that you can see. Fix that problem, then try again and fix the next problem. Keep correcting problems and trying again until finally everything works.
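One way to see what the job actually did while unattended is to make it log everything it prints. A minimal sketch (the log path and commands are placeholders; the same >> file 2>&1 redirection syntax also works inside a Windows .bat file):

```shell
#!/bin/sh
# Capture both stdout and stderr of the job's commands into a log file
# that can be inspected after the scheduled run.
log=$(mktemp)
{
  echo "job started: $(date)"
  ls /nonexistent-path-demo   # a failing command; its error lands in the log too
} >> "$log" 2>&1
cat "$log"
```

With the log in hand, each failing command can be fixed one at a time, as described above.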

Changing the environment in Hudson so that it stays for the whole build

How can I execute a batch file, or just a few (e.g. two) commands, in a Hudson job (running on Windows XP, as a non-service, though that may change) so that the environment change lasts for the whole build?
I need to do this because I have to change the current path with 'cd' (we use relative paths in our project) and 'set' some environment variables for msbuild.
Thank you in advance.
I'm not sure why you need to get out of the service realm. My understanding so far is that Hudson starts a new environment for every job, so that the jobs don't interfere with each other. So, as long as you don't use commands that affect other environments (e.g. subst), you will be fine adding an "Execute Windows batch command" step.
If your service runs with the wrong permissions, you have two options: change the permissions of the service (run it under a different user than the local system account), or call the runas command. If for whatever reason you still need to confine changes to certain parts of your job, you can always call cmd to create a new environment.
