Is there a way to tell whether a PowerShell script has executed at least once before, without the script itself creating any logs? I.e., is there some sort of native record-keeping mechanism for already-executed scripts (an example would be an event being generated and hence an event log entry created), meaning a record of the script's execution was made at runtime, but externally?
For example: if script A was executed once today, then during its second execution (say, two days later), check whether it had already executed before.
Can this be done through any Event logs, or through environment variables?
EDIT: Please note that for this particular script, no text files or logs can be created. Is there a way to do this without actually leaving a trace "physically", relying instead on some parameter that is set when a script executes?
EDIT 2: This script would be executing with the least privileges, i.e. not only under an account without admin permissions, but also one without permission to create text or log files.
How about self-modifying code? It's technically cheating, as the change is made in the script file itself, but no external logging is done.
Write-Host -NoNewline "Script has started"
# Read the script's own source code
$src = Get-Content $MyInvocation.MyCommand.Path
$header = $src[0]
if ($header -notmatch "^#") {
    Write-Host " ...first time!"
    # Prepend a marker line, then overwrite the script with the new source
    $newScript = @()
    $newScript += "#"
    $newScript += $src
    Set-Content $MyInvocation.MyCommand.Path $newScript
} else {
    Write-Host " ...nth time!"
}
The script reads its own contents. If the first line doesn't start with a hash, this is the first invocation. The script then builds a new set of source code: the first line is a hash, and the original source comes after it. Then the original script file is overwritten.
As a side note, the requirement to log-but-not-log is self-contradictory, just as @Mathias R. Jessen pointed out. There is, however, process tracking auditing available in Windows, and there is also script block logging. In theory (and with the proper permissions), one might search the Windows event logs for previous run attempts.
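For example, if script block logging is enabled and the account is allowed to read the log, something along these lines could look for earlier runs (the script name is a placeholder):
# Search the PowerShell Operational log for script block logging
# events (ID 4104) that mention the script; 'ScriptA.ps1' is hypothetical.
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-PowerShell/Operational'
    Id      = 4104
} -ErrorAction SilentlyContinue |
    Where-Object { $_.Message -match 'ScriptA\.ps1' }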
I'd much rather pop the why stack and find out the underlying reason, as the requirement to eat a cake and keep it too sounds very much like an XY problem.
The application moves files from one directory to another, runs an exe, and then moves files from one directory to another.
When I run the application manually it works as expected.
However, when trying to run it as a scheduled task I get the following error: 3762504530
I did some research, and it appears it may have to do with the application trying to run interactively even though no user is actually logged in.
I have tried to suppress outputs but that didn't seem to have any effect.
Without seeing the code, my guess is that you are using console output or something similar.
If so, change Write-Host to Write-Output (alias echo), and pipe it to a log file if you want.
Also make sure your script runs non-interactively (no prompts, etc.).
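For instance, a minimal sketch of logging to a file instead of the console (the log path is an arbitrary choice):
# A scheduled task has no console, so write progress to a log file instead
Write-Output "Moving files..." | Out-File -Append -FilePath "C:\Logs\task.log"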
Unchecking "compile a graphic windows program" (the -noConsole parameter) remedied the error.
I am looking for a strategy suggestion.
I am very new to Linux shell scripting; I have been learning tcsh for no more than a month.
I need a script that automatically detects when the result files have finished copying back from a remote server to a folder on a remote machine, and then starts scp-ing the files back to my workstation.
I do not know in advance when the job will finish running, so the folder could have no result files for a long while. I also do not know when the last result file will have finished copying back from the remote server to the folder (and thus when the scp can start).
I tried crontab. It works fine when I guess the timing correctly; most of the time it is just disappointing.
So I tried to write a script myself, and I have it now. I intend to produce a script that serves both me and my colleagues.
To use the script, the user first needs to log in to the remote machine manually and only then execute the script there. The script first asks the user to enter their local machine name and the directory where they wish to save the result files.
Then the script simply loops to detect when the total number of files changes; that means the first result file has started to copy back from the remote server. It then loops again to detect when the total file size in the folder stops changing, which means the last result file has finished copying into the folder. After that, it executes scp to send all the result files to the user's workstation, into the initially specified directory.
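For illustration, a rough sketch of that polling logic in tcsh might look like the following (the directory, interval, and destination names are placeholders, not the actual script):
#!/bin/tcsh
# Placeholder values standing in for the user's two prompted inputs
set machine = user@workstation
set savedir = /home/user/results
set dir = /path/to/results
# Wait until the first result file appears in the watched folder
set count = 0
while ($count == 0)
    sleep 60
    set count = `ls $dir | wc -l`
end
# Wait until the total size of the folder stops changing
set prev = -1
set size = `du -ks $dir | cut -f1`
while ($size != $prev)
    set prev = $size
    sleep 60
    set size = `du -ks $dir | cut -f1`
end
# The last file has finished copying; send everything to the workstation
scp $dir/* ${machine}:${savedir}/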
The script works fine, but I wish to make it run in the background and keep running by itself even if the user logs out of the remote machine and closes the terminal. I would also like the user to be able to start the script by typing a single simple command in the terminal, something like
./script.tcsh
I tried to run the script with the command
./script.tcsh &
but that fails, because a background process cannot accept user input.
I googled and found something called disown, but the command is not found; apparently neither the remote machine nor my machine supports it.
I then tried to modify the script to first accept the user input, and then attempt to use
cat > temp_script.tcsh << EOF
{rest of my script}
EOF
and then a line of
./temp_script.tcsh &
to try to create a second script file and have the first script launch it in the background. This also fails, because cat does not treat $variable as literal text; it replaces it with its value. I have a foreach i (1 2) loop, and the cat command keeps reporting an error (a missing value for the variable i, which is just the counter in the foreach loop syntax).
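(For reference, in csh-family shells substitution inside a here-document is usually suppressed by quoting the delimiter, so a variant like the following untested sketch should pass $i and other variables through literally:
cat > temp_script.tcsh << 'EOF'
{rest of my script, with $variables left untouched}
EOF
)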
I am out of ideas at the moment.
Can anyone enlighten me with some strategy that I can try myself?
The goal is to use only one script file, prompt the user for two inputs (machine name and directory to save to), and then require no further interaction or waiting, with the script continuing to run even after the terminal is closed.
Note: I do not need a password to log in to the remote machine and back.
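(One strategy worth testing, if it is available on the remote machine: collect the two inputs first, then re-launch the remaining work under nohup so it ignores the hangup signal sent at logout. Roughly:
nohup ./temp_script.tcsh >& /dev/null &
This is only a sketch; whether background jobs survive logout also depends on how the remote shell is configured.)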
I am running a powershell script from a batch file:
try {
    Start-Transcript -Path ("C:\PS\Logs\XXXX_Session_QA_" + (Get-Date).ToString("yyyyMMdd-hhmmss-tt") + ".txt")
    <Rest of the Code>
}
catch {
    Stop-Transcript
}
Every time I run the script, I see the error
Error: Transcription has not been started. Use the start-transcript command to start transcription.
I have gone through some or most of the previous posts, and the code should work, but it isn't triggering at all. The initial log file isn't being created either. Any idea why this could be happening, or any help here?
Does the folder structure exist where you are trying to write the transcript to? According to the documentation, it is required (see the last sentence of the quote below):
-Path
Specifies a location for the transcript file. Enter a path to a .txt
file. Wildcards are not permitted. If you do not specify a path,
Start-Transcript uses the path in the value of the $Transcript global
variable. If you have not created this variable, Start-Transcript
stores the transcripts in the $Home\My Documents directory as
\PowerShell_transcript.<time-stamp>.txt files.
If any of the directories in the path do not exist, the command fails.
https://technet.microsoft.com/library/05b8f72c-ae3b-45d5-95e0-86aa1ca1908a(v=wps.630).aspx
You will also need permission to write to that path of course.
So... if the Start-Transcript cmdlet is throwing an error, you are catching it (invisibly) in the catch {} block, which then executes Stop-Transcript.
That is presumably what is actually causing the error message: the net result is that you are trying to stop a transcription that was never started in the first place.
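As an illustration (folder and timestamp format taken from the question), one way to avoid the silent failure is to create the folder up front and only stop a transcript that actually started:
# Create the log folder if it is missing, so Start-Transcript cannot fail on the path
$logDir = 'C:\PS\Logs'
if (-not (Test-Path $logDir)) {
    New-Item -ItemType Directory -Path $logDir | Out-Null
}
Start-Transcript -Path (Join-Path $logDir ("XXXX_Session_QA_" + (Get-Date).ToString("yyyyMMdd-hhmmss-tt") + ".txt"))
try {
    # <Rest of the Code>
}
finally {
    # Reached only if Start-Transcript succeeded above
    Stop-Transcript
}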
I want to create a PowerShell script that sets the system volume to a specified level, and then runs an audio file found in the same directory as the script. I've figured out the first part, but I can't seem to manage the second. It is important that the folder can be moved around and renamed, and the script should still work. This creates the problem that I cannot simply use Invoke-Item and then specify the filename, as the path is subject to change.
Edit:
My attempt:
$player = New-Object System.Media.SoundPlayer "$env:userprofile\SoundFile.wav"
$player.Play()
Start-Sleep -s 10
This has the problem that it doesn't work if the path is changed.
Found a solution: I used the automatically defined variable $PSScriptRoot, which had apparently been added in PS 3.0. So the line is now
$player = New-Object System.Media.SoundPlayer "$PSScriptRoot\SoundFile.wav"
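For reference, a sketch that also covers PowerShell 2.0, where $PSScriptRoot is not populated in scripts; PlaySync is used here so the script does not have to guess a sleep duration:
# Derive the script's folder in a way that works on PowerShell 2.0 as well
$scriptDir = if ($PSScriptRoot) { $PSScriptRoot } else { Split-Path -Parent $MyInvocation.MyCommand.Path }
$player = New-Object System.Media.SoundPlayer (Join-Path $scriptDir 'SoundFile.wav')
$player.PlaySync()   # blocks until playback finishes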
I want to know how to make a set of statements or a command in a Windows batch file or PowerShell script execute just once. Even if I run the script multiple times, that particular piece of code or program should run only once.
If possible, give an example for both batch files and PowerShell.
In both cases you need to make changes that (a) persist beyond running the batch file or PowerShell script and (b) are visible when you start it the next time. What those changes are would depend on how cluttered you want to leave the system.
One option would be an environment variable. You can set those from batch files with setx, or from PowerShell with [Environment]::SetEnvironmentVariable. You should also set them normally to have them in the current session, to make sure the script can't be run again from that session either.
if defined AlreadyRun (
    echo This script ran already
    goto :eof
)
...
setx AlreadyRun 1
set AlreadyRun=1
or
if (Test-Path Env:\AlreadyRun) {
    Write-Host 'This script ran already'
    exit
}
...
[Environment]::SetEnvironmentVariable('AlreadyRun', '1', [EnvironmentVariableTarget]::User)
$env:AlreadyRun = '1'
This approach has its drawbacks, though. For example, it won't prevent you from running the same script twice in different processes that both existed at the time the script ran first. This is because environment variables are populated on process start-up and thus even the system-wide change only applies to new processes, not to those already running.
Another option would be a file whose existence you check. This has the benefit of working even under the scenario outlined above that fails with environment variables. On the other hand, a file might be deleted accidentally more easily than an environment variable.
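A minimal sketch of the marker-file variant (the marker path is an arbitrary choice):
# Abort if the marker left behind by a previous run exists
$marker = Join-Path $env:APPDATA 'MyScript.alreadyrun'
if (Test-Path $marker) {
    Write-Host 'This script ran already'
    exit
}
# ... one-time work goes here ...
New-Item -ItemType File -Path $marker | Out-Null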
However, since your batch file or PowerShell script presumably should do something, i.e. have a side effect, you should probably use exactly that side effect as your abort criterion, provided it is visible somehow. E.g. if you are changing some system setting or creating a bunch of output files, you can just check whether the change has already been made or whether the files are already there.
A probably very safe option is to delete or rename the batch file / PowerShell script at the end of its first run, or at least to change its extension to something that won't be executed easily.