CTest with -j option yields different results with -j1 and -jN - bash

I'm setting up a unit test framework using CTest and CMake. The requirement is that each test command is executed in a Docker container, so the test itself runs inside the container.
The add_test call looks like this:
add_test(test_name /bin/sh runner.sh test_cmd)
where runner.sh is the script that runs the container and test_cmd is the test command that runs inside the container.
test_cmd is like this
/path/to/test/test_binary; CODE=$?; echo $CODE > /root/result.txt;
runner.sh has this code
docker exec -t -i --user root $CONTAINERNAME bash -c "test_cmd"
runner.sh then tries to read /root/result.txt from the container.
runner.sh spawns a new container for each test; each test runs in its own container, so there is no way they can interfere with one another when executed in parallel, and /root/result.txt is separate for each container.
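For reference, a minimal sketch of how such a runner.sh could be structured (the image name my_test_image, the container naming, and the cleanup steps are assumptions, not the actual script):
#!/bin/sh
# Hypothetical sketch of runner.sh: start a throwaway container, run the
# test command inside it, then read back the exit code it recorded.
CONTAINERNAME="ctest_$$"
TEST_CMD="$1"

docker run -d --name "$CONTAINERNAME" my_test_image sleep 3600
docker exec -t -i --user root "$CONTAINERNAME" bash -c "$TEST_CMD"
CODE=$(docker exec --user root "$CONTAINERNAME" cat /root/result.txt)
docker rm -f "$CONTAINERNAME"
exit "$CODE"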
When I run the tests like this:
make test ARGS="-j8"
for some specific tests /root/result.txt is not generated, so reading from that file fails (the docker exec for test_cmd has already returned by then), and I cannot see the stdout of those tests in LastTest.log.
When I run the tests like this:
make test ARGS="-j1"
all tests pass, /root/result.txt is generated for every test, and I can see their output (stdout). The failing behaviour shows up for any -j greater than 1.
The tests are not being timed out; I checked.
My guess is that, before
echo $CODE > /root/result.txt;
has run, runner.sh is already trying to read the exit status from /root/result.txt. But then how does it always pass with -j1? And in sh the execution is sequential: until one command exits, it doesn't move ahead.
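One way to sidestep the result-file handshake entirely would be to let docker exec report the test's exit status itself; this is only a sketch of the idea, not the original runner.sh:
# docker exec returns the exit status of the command it ran, so the
# result file (and any race on reading it) can be avoided. Dropping -t -i
# also avoids allocating a TTY, which may not be available when CTest
# runs jobs in parallel without an attached terminal.
docker exec --user root "$CONTAINERNAME" bash -c "/path/to/test/test_binary"
exit $?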
One interesting observation: when I run the same docker exec command from a Python script using subprocess instead of bash, it works.
import subprocess

def executeViaSubprocess(cmd, doOutput=False):
    # cmd is the full docker exec command as a list of arguments
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p.communicate()
    retCode = p.returncode
    return retCode, stdout, stderr

Related

How to execute a bash script with options in Robot Framework

I want to execute a bash script from Robot Framework.
In a terminal I use this command:
bash /home/Documents//script.sh --username=root --password=hello --host=100.100.100.100 --port=400 - --data='{"requestId":1,"parameters":{"name":"check","parameters":{"id":"myID"}}}'
and it works
In the robot script I try:
Running script
    ${result} =    Run Process    bash /home/Documents//script.sh "username\=root" "password\=hello" "host\=100.100.100.100" "port\=400" "data\='{"requestId":1,"parameters":{"name":"check","parameters":{"id":"myID"}}}'"    shell=True    stdout=stdout.txt
    Log To Console    ${result}
    Log    ${result}
    Log    ${result.stdout}
    Log    ${result.stderr}
But I get Missing required arguments: username, password, host, port. The Process library doesn't recognise the arguments.
How do I pass script arguments in Robot Framework with the Process library?
Please show examples; I already checked the Process library documentation on specifying the command and arguments, but I don't understand it.
After the night, I found the solution:
Running script
    ${result} =    Run Process    bash /home/Documents//script.sh username\=root password\=hello host\=100.100.100.100 port\=400 data\='{"requestId":1,"parameters":{"name":"check","parameters":{"id":"myID"}}}'    shell=True    stdout=stdout.txt
The options should be left unquoted, but each = must be escaped as \=.

Detect if docker ran successfully within the same script

My script.sh:
#!/bin/sh
docker run --name foo
(Just assume that the docker command works and the container name is foo. Can't make the actual command public.)
I have a script that runs a docker container. I want to check that it ran successfully and echo the success status to the terminal.
How can I accomplish this using the container name? I know that I have to use something like docker inspect, but when I try to add that command it only gets executed after I ^C my script, probably because the docker command still has control of the terminal.
In this answer, docker is executed from some other script, so it doesn't really work for my use case.
The linked answer from Jules Olléon works for permanently running services like webservers, application servers, databases and similar software. In your example, it seems that you want to run a container on demand, which is designed to do some work and then exit. Here, the status doesn't help.
When running the container in foreground mode, as your example shows, it forwards the application's return code to the calling shell. Since you didn't post any code, here is a simple example: we create a rc.sh script returning 1 as exit code (which normally indicates some failure):
#!/bin/sh
echo "Testscript failed, returning exitcode 1"
exit 1
It got copied and executed in this Dockerfile:
FROM alpine:3.7
COPY rc.sh .
ENTRYPOINT [ "sh", "rc.sh" ]
Now we build this image using docker build -t rc-test . and execute a short-lived container:
$ docker run --rm rc-test
Testscript failed, returning exitcode 1
Bash gives us the return code in $?:
$ echo $?
1
So we see that the container failed, and we can check for that, e.g. inside a bash script, with an if-condition that performs some action when it fails:
#!/bin/bash
# Capture the exit code before testing it; inside "if ! cmd" the value of
# $? would already be the negated status.
docker run --rm rc-test
rc=$?
if [ "$rc" -ne 0 ]; then
    echo "Docker container failed with rc $rc"
fi
After running your docker run command you can check this way if your docker container is still up and running:
s='^foo$'
status=$(docker ps -qf "name=$s" --format='{{.Status}}')
[[ -n $status ]] && echo "Running: $status" || echo "not running"
You just need to run it with "-d" to execute the container in detached mode. With that, the solutions provided in the other post and the one provided by @anubhava both work.
docker run -d --name some_name mycontainer
s='^some_name$'
status=$(docker ps -qf "name=$s" --format='{{.Status}}')
[[ -n $status ]] && echo "Running: $status" || echo "not running"
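If you also need the exit code of such a short-lived container once it has finished, Docker can report it directly; the container name foo is the one assumed in the question:
# Block until the container named "foo" exits, then print its exit code
docker wait foo
# Or query the exit code of a container that has already stopped
docker inspect --format='{{.State.ExitCode}}' foo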

Jenkins fails with Execute shell script

I have my bash script at ${JENKINS_HOME}/scripts/convertSubt.sh.
My job has an Execute shell build step.
However, after I run the job it fails.
The error message (i.e. the 0: part) suggests that there is an error while executing the script.
You could run the script with
sh -x convertSubt.sh
To be on the safe side, you could also do a
ls -l convertSubt.sh
file convertSubt.sh
before you run it.
Make sure that the script exists with ls.
There is no need for sh; just run ./convertSubt.sh (make sure you have execute permissions).
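Putting those suggestions together, the Execute shell build step could look roughly like this (just a sketch, reusing the script path from the question):
#!/bin/bash
# Confirm the script is where we expect it and what kind of file it is
ls -l "${JENKINS_HOME}/scripts/convertSubt.sh"
file "${JENKINS_HOME}/scripts/convertSubt.sh"
# Run it with tracing so every command it executes shows up in the build log
sh -x "${JENKINS_HOME}/scripts/convertSubt.sh"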

Can't output result in bash from an ant command

I am writing a bash script that modifies some config files, runs "ant ear war" as a different user, outputs the result, and exits back to root to continue with the rest of the script. The issue is that the script does not continue after the exit, and I don't get any output from "ant ear war".
Thank you for the help.
Here is an example:
# When running the bash script I don't see the output. Maybe it's because I run it as root and switch to another_user. So I tried outputting the result into a variable and into a text file. Both failed.
su another_user
cd /usr/empi/MMEMPIV741/
echo $(ant ear war) >> /tmp/empi_install.txt
varant="$?"
echo "if zero it's success otherwise it's a failure"
cp /usr/accessmgr/AMV741/bin/am/JBoss/AccessManager.war /usr/jboss/jboss-eap-4.3/jboss-as/server/default/deploy/
cp /usr/empi/MMEMPIV741/person_project/working-dir/dist/* /usr/jboss/jboss-eap-4.3/jboss-as/server/default/deploy/
exit
#By this time above is exited from another_user and should return to root
echo $varant
echo "http://`hostname`:21080/PersonMasterIndexDQM/flex/login.jsp#"
Put the commands you want to run in a different user context into a separate script and run that script via
su another_user -c /path/to/other.sh
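Applied to the example above, that could look roughly like this (other.sh and its path are illustrative names, not part of the original script):
#!/bin/sh
# /path/to/other.sh -- everything in here runs as another_user
cd /usr/empi/MMEMPIV741/ || exit 1
ant ear war >> /tmp/empi_install.txt 2>&1

The main script then stays root the whole time:
#!/bin/bash
su another_user -c /path/to/other.sh
varant=$?
echo "ant exit status: $varant"
cp /usr/accessmgr/AMV741/bin/am/JBoss/AccessManager.war /usr/jboss/jboss-eap-4.3/jboss-as/server/default/deploy/
cp /usr/empi/MMEMPIV741/person_project/working-dir/dist/* /usr/jboss/jboss-eap-4.3/jboss-as/server/default/deploy/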

Why do I end up with two processes?

I wrote a script that has been running as a daemon for quite some time now.
If I ever needed to debug it, I would stop the daemon version and rerun it manually in the current shell. I have never logged anything from this script, but as I am getting ready to deploy it on a remote server, I figured I want to log any errors the script runs into. For that purpose I followed hints from several SO postings and am doing the following:
if ! tty > /dev/null; then
exec > >(/bin/logger -p syslog.warning -t mytag -i) 2>&1
fi
This seems to log just fine; I am just surprised to see two instances of my script listed by ps when this feature is enabled. Is there a way to avoid it?
I know I get another process for logger, and I assume it has to do with the >( ... ), but I still hope to avoid the extra copy of my script.
bash spawns a subshell to execute the command(s) in >( ... ). In this case, the only thing that subshell does is run /bin/logger, so it's rather pointless. I think you can "fix" this with another exec command:
if ! tty > /dev/null; then
exec > >(exec /bin/logger -p syslog.warning -t mytag -i) 2>&1
fi
This doesn't prevent the subshell from starting, but then instead of running /bin/logger as a subprocess (of the subshell), the subshell gets replaced with /bin/logger. I haven't tested this with logger, but it worked fine in a quick test I did with cat.
Look at the PPID (parent process ID) column; I think you'll see that the two processes are connected to each other.
Generally, commands surrounded by ( ) pairs indicate running as a subprocess, hence two listings in ps, because there are two copies of the process.
(I'm not familiar with the bash syntax exec > >( ... ) 2>&1, i.e. the first '>' separated by a space from the second '>'.)
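To see the parent/child relationship for yourself, something like this will show both entries (the script name my_daemon.sh is just a placeholder):
# Show PID, parent PID and command line for every copy of the script;
# the second copy's PPID should match the first copy's PID.
ps -o pid,ppid,cmd -C my_daemon.sh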