I have a program where I test different data sets and configurations, and I have a script to execute all of them.
Imagine my code as:
double start = omp_get_wtime();
function();
double end = omp_get_wtime();
printf("%f\n", end - start);
and the bash script as:
for a in "${first_option[@]}"
do
    for b in "${second_option[@]}"
    do
        for c in "${third_option[@]}"
        do
            printf '%s %s %s\n' "$a" "$b" "$c"
            ./exe "$a" "$b" "$c" >> logs.out
        done
    done
done
Now when I execute the exact same configurations by hand, I get varying results from 10 seconds down to 0.05 seconds. But when I execute the script, I get the same results on the high end, yet for some reason I can't get any timings lower than 1 second: all the configurations that compute in less than a second when run manually get written to the file as 1.001; 1.102; 0.999; etc.
Any idea what is going wrong?
Thanks
My suggestion would be to remove the ">> logs.out" to see what happens with the speed.
From there you can try several options:
Replace ">> logs.out" with "| tee -a logs.out"
Investigate stdbuf and, if your code is Python, look at the "PYTHONUNBUFFERED=1" shell variable. See also: How to disable stdout buffer when running shell
Redirect the bash printf with ">&2" (write to stderr) and move ">> logs.out" or "| tee -a logs.out" behind the last "done", as sketched below
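For example, a minimal sketch of that last option (assuming the arrays and ./exe from your question):

for a in "${first_option[@]}"; do
    for b in "${second_option[@]}"; do
        for c in "${third_option[@]}"; do
            # Progress goes to stderr, so it still shows on the terminal.
            printf '%s %s %s\n' "$a" "$b" "$c" >&2
            ./exe "$a" "$b" "$c"
        done
    done
done >> logs.out   # the log file is opened once, behind the last "done"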
You can probably see what is causing the delay by using:
strace -f -t bash -c "<your bash script>" | tee /tmp/strace.log
With a little luck you will see which system call is causing the delay at the bottom of the screen. But it is a lot of information to process. Alternatively, look for the name of your "./exe" in "/tmp/strace.log" after tracing is done, and then look for the system calls after the invocation (process start of ./exe) that eat the most time. It could be just many calls... Don't spend too much time on this if you don't have the stomach for it.
I am running the script with the following parameters:
test.ps1 -parm1 abc1 -parm2 abc2 -parm3 abc3
I am executing the script remotely from another application, and I want to run only one instance of the script when all the parameters are the same.
In other words, if all the parameters are the same, then only one instance of the script should be running at any time.
I am using the following logic, but it is returning null:
Get-WmiObject Win32_Process -Filter "Name='powershell.exe' AND CommandLine LIKE '%test.ps1%'"
If you ran this...
Get-WmiObject Win32_Process -Filter "Name='powershell.exe' AND CommandLine LIKE '%test.ps1%'"
... and it returned nothing, then that means it's not running, or it ran and closed.
I just tried what you posted and the above pulls the process line as expected.
# begin test.ps1 script
Param
(
$parm1,
$parm2,
$parm3
)
'hello'
# end test.ps1 script
# SCC is an alias for a function I have to shell out to the console host as needed
# aka Start-ConsoleCommand
# it has code to prevent the console host from closing so I can work in it if needed.
scc -ConsoleCommand '.\test.ps1 -parm1 abc1 -parm2 abc2 -parm3 abc3'
# console window results
hello
Check the process info
Get-WmiObject Win32_Process -Filter "Name='powershell.exe' AND CommandLine LIKE '%test.ps1%'"
# Results
...
Caption : powershell.exe
CommandLine : "C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe" -NoExit -Command &{ .\test.ps1 -parm1 abc1 -parm2 abc2 -parm3 abc3
...
Yet your stated use case is kind of odd. Since you started the code in the first place, and you say you only want it run once, why bother starting it again just to check on what you already know is running?
If you need to run it multiple times sequentially, then do that sequentially.
If any user can use your app from any machine, then you'd still only have one instance running at a time on each machine, so the check really is moot, unless your code is working (create-update-delete actions) on the same files/database and you are trying to avoid errors when another user tries to use your code to act on them.
I assume you want to prevent multiple instances of the script from running simultaneously on one given host.
In that case, have your script create a so-called 'lock file' (just some text file at a specified location of your choice).
At the beginning of your script, check if the file exists (if it does, another instance is running: bail out!).
If it does not exist, create the lock file, do your script business, and at the end don't forget to delete the lock file (unless the script is not allowed to run more than once on that computer).
Feel free to add additional information to the lock file (e.g. the parameters being used, or the process ID) to make even more versatile use of that file; the sketch below stores both.
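A minimal sketch of that approach (the lock-file location is an assumption; the parameters are the ones from your script):

# Pick a well-known lock path; $env:TEMP is just one possible location.
$lockFile = Join-Path $env:TEMP 'test.ps1.lock'

if (Test-Path $lockFile) {
    # Another instance is (presumably) still running: bail out.
    Write-Host 'Another instance is already running.'
    exit 1
}

try {
    # Record the process ID and parameters for diagnostics.
    "$PID $parm1 $parm2 $parm3" | Set-Content $lockFile
    # ... do your script business here ...
}
finally {
    # Always remove the lock, even if the script throws.
    Remove-Item $lockFile -ErrorAction SilentlyContinue
}

Note there is still a small race window between Test-Path and Set-Content; if that matters in your setup, a named mutex is the airtight alternative.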
I'm adding some custom logging functionality to a bash script, and can't figure out why it won't take the output from one named pipe and feed it back into another named pipe.
Here is a basic version of the script (http://pastebin.com/RMt1FYPc):
#!/bin/bash
PROGNAME=$(basename $(readlink -f $0))
LOG="$PROGNAME.log"
PIPE_LOG="$PROGNAME-$$-log"
PIPE_ECHO="$PROGNAME-$$-echo"
# program output to log file and optionally echo to screen (if $1 is "-e")
log () {
if [ "$1" = '-e' ]; then
shift
"$@" > $PIPE_ECHO 2>&1
else
"$@" > $PIPE_LOG 2>&1
fi
}
# create named pipes if not exist
if [[ ! -p $PIPE_LOG ]]; then
mkfifo -m 600 $PIPE_LOG
fi
if [[ ! -p $PIPE_ECHO ]]; then
mkfifo -m 600 $PIPE_ECHO
fi
# cat pipe data to log file
while read data; do
echo -e "$PROGNAME: $data" >> $LOG
done < $PIPE_LOG &
# cat pipe data to log file & echo output to screen
while read data; do
echo -e "$PROGNAME: $data"
log echo $data # this doesn't work
echo -e $data > $PIPE_LOG 2>&1 # and neither does this
echo -e "$PROGNAME: $data" >> $LOG # so I have to do this
done < $PIPE_ECHO &
# clean up temp files & pipes
clean_up () {
# remove named pipes
rm -f $PIPE_LOG
rm -f $PIPE_ECHO
}
#execute "clean_up" on exit
trap "clean_up" EXIT
log echo "Log File Only"
log -e echo "Echo & Log File"
I thought the two commands marked above ("log echo $data" and "echo -e $data > $PIPE_LOG") would take the $data from $PIPE_ECHO and output it to $PIPE_LOG. But it doesn't work. Instead I have to send that output directly to the log file, without going through $PIPE_LOG.
Why is this not working as I expect?
EDIT: I changed the shebang to "bash". The problem is the same, though.
SOLUTION: A.H.'s answer helped me understand that I wasn't using named pipes correctly. I have since solved my problem by not even using named pipes. That solution is here: http://pastebin.com/VFLjZpC3
It seems to me you do not understand what a named pipe really is. A named pipe is not one stream like a normal pipe. It is a series of normal pipes, because a named pipe can be closed, and a close on the producer side might be shown as a close on the consumer side.
The "might be" part is this: the consumer will read data until there is no more data. "No more data" means that at the time of the read call no producer has the named pipe open. This means that multiple producers can feed one consumer only when there is no point in time without at least one producer. Think of it as a door which closes automatically: if there is a steady stream of people keeping the door always open, either by handing the doorknob to the next one or by squeezing multiple people through it at the same time, the door stays open. But once the door is closed, it stays closed.
A little demonstration should make the difference a little clearer:
Open three shells. First shell:
1> mkfifo xxx
1> cat xxx
No output is shown, because cat has opened the named pipe and is waiting for data.
Second shell:
2> cat > xxx
No output, because this cat is a producer which keeps the named pipe open until we explicitly tell it to close.
Third shell:
3> echo Hello > xxx
3>
This producer immediately returns.
First shell:
Hello
The consumer received data, wrote it, and, since one more producer keeps the door open, continues to wait.
Third shell
3> echo World > xxx
3>
First shell:
World
The consumer received data, wrote it, and, since one more producer keeps the door open, continues to wait.
Second Shell: write into the cat > xxx window:
And good bye!
(control-d key)
2>
First shell
And good bye!
1>
The ^D key closed the last producer, the cat > xxx, and hence the consumer exits as well.
In your case this means:
Your log function will try to open and close the pipes multiple times. Not a good idea.
Both your while loops exit earlier than you think. (Check this with ( while ... done < $PIPE_X; echo FINISHED; ) &.)
Depending on the scheduling of your various producers and consumers, the door might slam shut sometimes and sometimes not: you have a race condition built in. (For testing you can add a sleep 1 at the end of the log function.)
Your "testcases" only try each possibility once. Try using them multiple times (you will block, especially with the sleeps), because your producer might not find any consumer.
So I can explain the problems in your code but I cannot tell you a solution because it is unclear what the edges of your requirements are.
It seems the problem is in the "cat pipe data to log file" part.
Let's see: you use a "&" to put the loop in the background; I guess you mean it must run in parallel with the second loop.
But the problem is you don't even need the "&", because as soon as no more data is available in the fifo, the while..read stops. (Still, you've got to have some data at first for the first read to work.) The next read doesn't hang if no more data is available (which would pose another problem: how would your program stop?).
I guess the while read checks whether more data is available in the file before doing the read, and stops if that's not the case.
You can check with this sample:
mkfifo foo
while read data; do echo $data; done < foo
This script will hang until you write anything from another shell (or background the first one). But it ends as soon as a read works.
Edit:
I've tested on RHEL 6.2 and it works as you say (i.e., badly!).
The problem is that, after running the script (let's say script "a"), you've got an "a" process remaining. So, yes, in some way the script hangs as I wrote before (not as stupid an answer as I thought then :) ). Except if you write only one log (be it log file only or echo): in that case it works.
(It's the read loop from PIPE_ECHO that hangs when writing to PIPE_LOG and leaves a process running each time.)
I've added a few debug messages, and here is what I see:
Only one line is read from PIPE_LOG, and after that the loop ends.
Then a second message is sent to PIPE_LOG (after being received from PIPE_ECHO), but the process no longer reads from PIPE_LOG, so the write hangs.
When you ls -l /proc/[pid]/fd, you can see that the fifo is still open (but deleted).
In fact, the script exits and removes the fifos, but there is still one process using them.
If you don't remove the log fifo in the cleanup and you cat it afterwards, it will free the hanging process.
Hope it will help...
Let's say I have a bash function
Yadda() {
# time-consuming processes that must take place sequentially
# the result will be appended >> $OUTFILE
# $OUTFILE is set by the main body of the script
# No manipulation of variables in the main body
# Only local-ly defined variables are manipulated
}
Am I allowed to invoke the function as a background job in a subshell? E.g.:
OUTFILE=~/result
for PARM in $PARAMLIST; do
( Yadda $PARM ) &
done
wait
cat $OUTFILE
What do you think?
You can invoke the function as a background job in a subshell. It will work just like you typed it in your example.
I see one problem in the way you demonstrated it: if some of the processes finish simultaneously, they will try to write to OUTFILE at the same time and the output might get mixed up.
I suggest letting each process write to its own file, then collecting the files after all processes are done, as sketched below.
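A minimal sketch of that idea, assuming Yadda appends to whatever file $OUTFILE names, as in your question:

OUTFILE=~/result
i=0
for PARM in $PARAMLIST; do
    # Give each subshell its own OUTFILE so the appends cannot interleave.
    ( OUTFILE="$OUTFILE.$i"; Yadda "$PARM" ) &
    i=$((i + 1))
done
wait
cat "$OUTFILE".* > "$OUTFILE"   # collect the per-process results
rm -f "$OUTFILE".*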
I am running a PowerShell script from within a batch file. The script fetches a web page and checks whether the page's content is the string "OK".
The PowerShell script returns an error level to the batch script.
The batch script is executed by ScriptFTP, an FTP automation program. If an error occurs, I can have ScriptFTP send the full console output to the administrator via E-Mail.
In the PowerShell script, I would like to output the return value from the web site if it is not "OK", so the error message gets included in the console output, and thus in the status mail.
I am new to PowerShell and not sure which output function to use for this. I can see three:
Write-Host
Write-Output
Write-Error
What would be the right thing to use to write to the Windows equivalent of stdout?
Simply outputting something in PowerShell is a thing of beauty, and one of its greatest strengths. For example, the common Hello, World! application is reduced to a single line:
"Hello, World!"
It creates a string object, assigns the aforementioned value, and, being the last item on the command pipeline, calls the .toString() method and outputs the result to STDOUT (by default). A thing of beauty.
The other Write-* commands are specific to outputting the text to their associated streams, and have their place as such.
I think in this case you will need Write-Output.
If you have a script like
Write-Output "test1";
Write-Host "test2";
"test3";
then, if you call the script with redirected output, something like yourscript.ps1 > out.txt, you will get test2 on the screen and test1\ntest3\n in "out.txt".
Note that "test3" and the Write-Output line will always append a new line to your text, and there is no way in PowerShell to stop this (that is, echo -n is impossible in PowerShell with the native commands). If you want the (somewhat basic and easy in Bash) functionality of echo -n, see samthebest's answer.
If a batch file runs a PowerShell command, it will most likely capture the Write-Output command. I have had "long discussions" with system administrators about what should be written to the console and what should not. We have now agreed that the only information about whether the script executed successfully or died has to be Write-Host'ed, and everything the script's author might need to know about the execution (what items were updated, what fields were set, et cetera) goes to Write-Output. This way, when you submit a script to the system administrator, he can easily run thescript.ps1 > someredirectedoutput.txt and see on the screen whether everything is OK, then send "someredirectedoutput.txt" back to the developers.
I think the following is a good exhibit of Echo vs. Write-Host. Notice how test() actually returns an array of ints, not a single int as one could easily be led to believe.
function test {
Write-Host 123
echo 456 # AKA 'Write-Output'
return 789
}
$x = test
Write-Host "x of type '$($x.GetType().name)' = $x"
Write-Host "`$x[0] = $($x[0])"
Write-Host "`$x[1] = $($x[1])"
Terminal output of the above:
123
x of type 'Object[]' = 456 789
$x[0] = 456
$x[1] = 789
You can use any of these in your scenario since they write to the default streams (output and error). If you were piping output to another cmdlet, you would want to use Write-Output, which will eventually terminate in Write-Host.
This article describes the different output options: PowerShell O is for Output
What would be the right thing to use to write to the Windows equivalent of stdout?
In effect, but very unfortunately, both Windows PowerShell and PowerShell Core (as of v7.2) send all of their 6(!) output streams to stdout when called from the outside, via PowerShell's CLI.
See GitHub issue #7989 for a discussion of this problematic behavior, which likely won't get fixed, so as to preserve backward compatibility.
In practice, this means that whatever PowerShell stream you send output to will be seen as stdout output by an external caller:
E.g., if you run the following from cmd.exe, you'll see no output, because the stdout redirection to NUL applies equally to all PowerShell streams:
C:\>powershell -noprofile -command "Write-Error error!" >NUL
However, curiously, if you redirect stderr, PowerShell does send its error stream to stderr, so that with 2> you can capture the error-stream output selectively; the following outputs just 'hi' (the success-stream output) while capturing the error-stream output in file err.txt:
C:\>powershell -noprofile -command "'hi'; Write-Error error!" 2>err.txt
The desirable behavior is:
Send PowerShell's success output stream (number 1) to stdout.
Send output from all other streams to stderr, which is the only option, given that between processes only two output streams exist: stdout (standard output) for data, and stderr (standard error) for error messages and all other types of messages, such as status information, that aren't data.
It's advisable to make this distinction in your code, even though it currently isn't being respected; a sketch follows below.
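A minimal sketch of that distinction inside a script (the commands are purely illustrative):

# Data goes to the success stream; everything else goes to other streams.
Write-Host 'Scanning for text files...'    # status for the user, not data
Get-ChildItem *.txt                        # data: implicit success-stream output
Write-Error 'demo: something went wrong'   # message: error stream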
Inside PowerShell:
Write-Host is for display output, and bypasses the success output stream; as such, its output can neither be (directly) captured in a variable nor suppressed nor redirected.
Its original intent was simply to create user feedback and simple, console-based user interfaces (colored output).
Due to the prior inability to be captured or redirected, PowerShell version 5 made Write-Host write to the newly introduced information stream (number 6), so since then it is possible to capture and redirect Write-Host output (see the example after this list).
Write-Error is meant for writing non-terminating errors to the error stream (number 2); conceptually, the error stream is the equivalent of stderr.
Write-Output writes to the success [output] stream (number 1), which is conceptually equivalent to stdout; it is the stream to write data (results) to.
However, explicit use of Write-Output is rarely needed due to PowerShell's implicit output feature:
Output from any command or expression that isn't explicitly captured, suppressed or redirected is automatically sent to the success stream; e.g., Write-Output "Honey, I'm $HOME" and "Honey, I'm $HOME" are equivalent, with the latter not only being more concise, but also faster.
See this answer for more information.
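For instance, here is a minimal sketch of redirecting and capturing Write-Host output via the information stream (requires PowerShell 5 or later; the file name is arbitrary):

# Stream 6 is the information stream that Write-Host writes to in PS 5+.
Write-Host 'status message' 6> host.log        # redirect to a file
$captured = Write-Host 'status message' 6>&1   # merge into the success stream and capture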
Write-Host ("Found file - " + $File.FullName) -ForegroundColor Magenta
Magenta can be one of the "System.ConsoleColor" enumerator values: Black, DarkBlue, DarkGreen, DarkCyan, DarkRed, DarkMagenta, DarkYellow, Gray, DarkGray, Blue, Green, Cyan, Red, Magenta, Yellow, White.
The + $File.FullName part is optional, and shows how to put a variable into the string; note the parentheses, which make sure the concatenation is evaluated before being passed to Write-Host.
You simply cannot get PowerShell to omit those pesky newlines. There is no script or cmdlet that does this.
Of course Write-Host is absolute nonsense, because you can't redirect/pipe from it! You simply have to write your own:
using System;
namespace WriteToStdout
{
class Program
{
static void Main(string[] args)
{
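// Join all arguments with single spaces and write them to stdout
// without appending a trailing newline (the echo -n behavior).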
if (args != null)
{
Console.Write(string.Join(" ", args));
}
}
}
}
E.g.
PS C:\> writetostdout finally I can write to stdout like echo -n
finally I can write to stdout like echo -nPS C:\>