I'm pretty new to Jenkins and I have a simple yet annoying problem. When I run a job (build) on Jenkins, I trigger a ruby command to execute my test script.
The problem is that Jenkins is not displaying the console output in real time. Here is the trigger log:
Building in workspace /var/lib/jenkins/workspace/foo_bar
No emails were triggered.
[foo_bar] $ /bin/sh -xe /tmp/hudson4042436272524123595.sh
+ ruby /var/lib/jenkins/test-script.rb
Basically it hangs at this point until the build is complete, then it just shows the full output. The funny thing is that this is not consistent behavior: sometimes it works as it should, but most of the time there is no real-time console output.
Jenkins version: 1.461
To clarify some of the answers.
ruby or python or any sensible scripting language will buffer its output in order to minimize IO: writing to disk is slow, writing to a console is slow...
Usually the data gets flush()'ed automatically once enough of it accumulates in the buffer, with special handling for newlines. E.g. writing a string without a newline and then calling sleep() would not write anything until after the sleep() completes (I'm only using sleep as an example; feel free to substitute any other expensive system call).
E.g. run in a terminal, this would wait 8 seconds, print one line, wait 5 more seconds, then print a second line:
from time import sleep

def test():
    print("ok", end=" ")   # no newline: the text sits in the buffer
    sleep(3)
    print("now", end=" ")  # still buffered
    sleep(5)
    print("done")          # newline: on a line-buffered terminal the line appears now
    sleep(5)
    print("again")

test()
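You can see the difference by comparing a terminal run with a piped run (a sketch, assuming the snippet above is saved as a hypothetical buffer_demo.py; the pipe simulates Jenkins capturing the output):
python3 buffer_demo.py          # terminal: line-buffered, the lines appear at 8s and 13s
python3 buffer_demo.py | cat    # pipe: block-buffered, everything appears at once on exit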
for ruby, STDOUT.sync = true turns autoflush on: every write to STDOUT is followed by flush(). This would solve your problem, at the cost of more IO.
STDOUT.sync = true
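A quick way to watch the effect (a sketch; the one-liners stand in for your test script, and cat simulates Jenkins reading the pipe):
ruby -e '3.times { print "."; sleep 1 }' | cat                      # dots arrive in one burst at the end
ruby -e 'STDOUT.sync = true; 3.times { print "."; sleep 1 }' | cat  # dots arrive one per second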
for python, you can use python -u or the environment variable PYTHONUNBUFFERED to make stdin/stdout/stderr unbuffered, but there are other solutions that do not change stdin or stderr:
export PYTHONUNBUFFERED=1
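Either form works; a Jenkins "Execute shell" build step could read (a sketch, with a hypothetical test-script.py):
PYTHONUNBUFFERED=1 python test-script.py
# or, equivalently:
python -u test-script.py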
for perl, you have autoflush (setting the special variable $| to a true value has the same effect on the currently selected handle):
autoflush STDOUT 1;
Make sure your script is flushing its stdout and stderr.
In my case I had a buffering issue similar to what you describe, but I was using python. The following python code fixed it for me:
import sys
sys.stdout.flush()
I'm not a Ruby coder, but Google reveals the following:
$stdout.flush
It seems to me that python -u works as well. E.g. in a batch command:
python -u foo.py
The easiest solution here is to turn on syncing the buffer to the output, something @Craig wrote about in his answer, but as a one-line solution that covers the whole script and does not require you to flush the buffer many times.
Just write
STDOUT.sync = true
The logic behind it is simple: to avoid performing many IO operations, output is buffered by default. To turn syncing back off and restore buffering, use
STDOUT.sync = false
This is a Ruby solution, of course.
Each of the other answers is specific to one program or another, but I found a more general solution here:
https://unix.stackexchange.com/a/25378
You can use stdbuf to alter the buffering behavior of any program.
In my case, I was piping output from a shell script through tee and grep to split lines between the console and a file based on content. The console was hanging as the OP describes. This solved it:
./slowly_parse.py login.csv | tee >(grep -v LOG: > out.csv) | stdbuf -oL -eL grep LOG:
Eventually I discovered I could just pass --line-buffered to grep for the same result:
./slowly_parse.py login.csv | tee >(grep -v LOG: > out.csv) | grep --line-buffered LOG:
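More generally, stdbuf can wrap the producing end of a pipeline as well (a sketch with a hypothetical ./producer; note that stdbuf only affects programs that use C stdio buffering, so it will not help with every interpreter):
stdbuf -oL ./producer | tee build.log   # line-buffer the producer's stdout so tee sees each line immediately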
The other answers are correct in saying that you need to ensure standard output is not buffered.
The other thing to be aware of is that Jenkins itself buffers line by line. If you have a slow-running process that emits single characters (for example, an NUnit test suite summary that prints a . for each successful test and an E for each error), you will not see anything until the end of the line.
[True for my Jenkins 1.572 running on a Windows box.]
For some commands, including tee, the best choice for unbuffering is a program called unbuffer, from the expect package.
Usage example:
instead of
somecommand | tee /some/path
do
somecommand | unbuffer -p tee /some/path
Sources and more info:
https://stackoverflow.com/a/11337310/2693875
https://unix.stackexchange.com/a/25375/53245
The operating system buffers output data by nature, to save CPU, and so does Jenkins.
It looks like you are using a shell command to run your Ruby script;
I suggest running your Ruby script directly via the dedicated plugin:
Jenkins Ruby Plugin
(may need to install it)
Python buffers its output and prints it at the end of the script to minimize writes to the console, since writing to the console is slow.
You can put the following call after your trace output; it flushes to the console everything queued before that point.
sys.stdout.flush()
Related
I've started playing CTF challenges, and I encountered a problem where I needed to send an exploit to a binary and then interact with the spawned shell.
I found a solution to this problem which looks something like this:
(echo -ne "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\xbe\xba\xfe\xca" && cat) | nc pwnable.kr 9000
Meaning:
without the "cat" sub-command, I couldn't interact with the shell, but with it, i now able to send commands into the spawned shell and get the returned output to my console stdout.
What exactly happens there? this command line confuses me
If you just type in cat at the command line, you'll be able to see that this command simply copies stdin to stdout one line at a time. It will carry on doing this until you either quit with Ctrl-C or send an EOF with Ctrl-D.
In this example you're running cat immediately after successfully printing the payload (the && operator tells the shell to run the second command only if the first exits with code zero, i.e. no error). As a result, the pipe, and hence the remote shell, won't see an EOF until you terminate cat as described above. Everything you type is sent via cat to the remote server, and everything it sends back appears on your stdout.
So yes, in effect you end up with an interactive shell. You can get pretty much the same effect on your own machine by running cat | sh.
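You can experiment with the same trick locally, without the network hop (a sketch):
(echo 'id' && cat) | sh
Without the cat, sh reads EOF right after the id command and exits; with it, whatever you type next keeps feeding the shell, until you end the session with Ctrl-D.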
I need to collect/log all the command lines that were used to start a process on my machine during the execution of a Perl script, which happens to be a test automation script. This Perl script starts the executable in question (MySQL) multiple times with various command lines, and I would like to inspect all of those invocations. What would be the right way to do this? One possibility I see is to run something like "ps aux | grep mysqld | grep -v grep" in a loop in a shell script and capture the results in a file, but then I would have to do some post-processing to remove duplicates etc., and I could miss some command lines because of timing issues. Is there a better way to achieve this?
Processing the ps output can always miss some processes: it only captures the ones that exist at that instant. The best way would be to modify the Perl script to log each command before or after executing it.
If that's not an option, you can get the child pids of the perl script by running:
pgrep -P $pid -a
-a makes pgrep print the full command line. $pid is the pid of the perl script. Then process just those.
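If modifying the script is not an option, a rough polling sketch (the log file name is hypothetical, and very short-lived children can still slip between iterations):
while kill -0 "$pid" 2>/dev/null; do   # loop while the perl script is still alive
    pgrep -P "$pid" -a
    sleep 0.2
done | sort -u > command_lines.log     # sort -u collapses the repeated snapshots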
You could use strace to log calls to execve.
$ strace -f -o strace.out -e execve perl -e 'system("echo hello")'
hello
$ egrep ' = 0$' strace.out
11232 execve("/usr/bin/perl", ["perl", "-e", "system(\"echo hello\")"], 0x7ffc6d8e3478 /* 55 vars */) = 0
11233 execve("/bin/echo", ["echo", "hello"], 0x55f388200cf0 /* 55 vars */) = 0
Note that strace.out will also show the failed execs (where execve returned -1), hence the egrep command to find the successful ones. A successful execve call does not return, but strace records it as if it returned 0.
Be aware that this is a relatively expensive solution, because it is necessary to include the -f option (follow forks), as perl does the exec call from forked subprocesses. This applies recursively, so your MySQL executable will itself run under strace. But for a one-off diagnostic test it might be acceptable.
Because of the recursion, any exec calls done by your MySQL executable will also appear in strace.out, and you will have to filter those out. But the PID is shown for all calls, and if you also log fork and clone calls (i.e. strace -e execve,fork,clone), you will see both the parent and child PIDs, in the form <parent_pid> clone(......) = <child_pid>, so you should hopefully have enough information to reconstruct the process tree and decide which processes you are interested in.
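Putting that together, the full invocation might look like this (a sketch; your-test-script.pl stands in for the real automation script):
strace -f -o strace.out -e trace=execve,fork,clone perl your-test-script.pl
egrep 'execve.* = 0$' strace.out   # keep only the execs that succeeded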
I want to implement a shell script that runs the command xyz and stores its output in a variable, while at the same time forwarding the command's output to the shell script's stdout.
This is because I want to launch this script via launchd, let launchd automatically log the script's output, but then also let the script push the individual command's output to the web. The script should not simply buffer the command's output and print it after the command has run, but should do so in real time.
Is something like this possible, and if so, how do you implement it?
You are looking for the command:
VAR=$(echo 'test' | tee /dev/tty)
test
echo "$VAR"
test
I believe there is no way to save the log into a shell variable and avoid buffering the command's output at the same time. But an alternative is to save the log messages to a file using tee(1). For example:
LOGFILE=/path/to/logfile
run_and_log() {
    "$@" | tee -a "$LOGFILE"
}
run_and_log xyz
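If you do want both the variable and the live output, the two approaches combine (a sketch; it relies on /dev/tty, so it only works where a terminal actually exists, which is not the case under launchd):
OUT=$(xyz | tee -a "$LOGFILE" /dev/tty)   # streams live to the terminal and the log, then lands in $OUT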
I am running Ubuntu 13.10 and want to write a bash script that will execute a given task at non-predetermined time intervals. My understanding is that cron jobs require me to know when the task will be performed again; thus, I was recommended to use "at".
I'm having a bit of trouble using "at." Based on some experimentation, I've found that
echo "hello" | at now + 1 minutes
will run in my terminal (with and without quotes). Running "atq" shows that the command is in the queue. However, I never see the results of the command. I assume that I'm doing something wrong, but the man pages don't seem to be telling me anything useful.
Besides the fact that at jobs run without a terminal (by default, at mails the job's output to you rather than printing it anywhere), your command would also not do what you expect, since what you're passing to at is not echo hello but just hello. Unless hello is really an existing command, it won't run. What you want is probably:
echo "echo hello" | at now + 1 minutes
If you want to know if your command is really running, try redirecting the output to a file:
echo "echo hello > /var/tmp/hello.out" | at now + 1 minutes
Check the file later.
Similar to redirect COPY of stdout to log file from within bash script itself, but I'd also like to preserve stdout as a TTY device.
For example, I have the following scripts:
/tmp/teed-off$ cat some-script
#!/usr/bin/env ruby
if $stdout.tty?
  puts "stdout is a TTY"
else
  puts "stdout is NOT a TTY"
end
/tmp/teed-off$ cat wrapper
#!/usr/bin/env bash
exec > >(tee some-script.log)
./some-script
When I run them, the wrapper eats stdout as a TTY device:
/tmp/teed-off$ ./some-script
stdout is a TTY
/tmp/teed-off$ ./wrapper
stdout is NOT a TTY
How can I flip that behavior around so that the script believes it's in a TTY even when executed via the wrapper?
It won't be trivial, but I think you can do it via pseudo-ttys. I'm not sure that there's any standard tool, other than perhaps expect, that would do it for you.
It takes a bit of thinking about. You'd have a control program that would open the pseudo-tty master, then the slave. The slave would be connected to the output of ./some-script. The master would be read by the control program, which would copy the data it reads from the master to the file and to standard output.
I've not tried coding that up. I'm not sure whether you could do it with standard shell commands; I can't think of any way. So, I think there will be some C coding to be done.
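For what it's worth, the pty juggling described above is essentially what script(1) from util-linux already does, so a shell-only version of the wrapper may be possible (a sketch using the file names from the question; BSD/macOS script has a different invocation syntax):
#!/usr/bin/env bash
# script runs ./some-script on a pseudo-tty slave and copies the master side
# to its own stdout; -q drops the start/done banners, -f flushes each write,
# and /dev/null discards the typescript file we don't need.
script -qfc ./some-script /dev/null | tee some-script.log
With that wrapper, some-script should report "stdout is a TTY" even though the output is still being teed to the log.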
Look for dup2; it duplicates a file descriptor:
int dup2(int oldfd, int newfd);