My Jenkins server has a problem: it runs my shell commands in reverse order.
I specify the commands to run:
copy over a file to another server
run the update script
For example,
$ nohup scp -i .ssh/blah -o StrictHostKeyChecking=no foo.txt tomcat@foo.coo.com:/tmp/FOO.txt &> /dev/null
$ nohup ssh -t -t -n -i .ssh/blah -o StrictHostKeyChecking=no tomcat@foo.coo.com '/home/tomcat/bin/update.sh /tmp/FOO.txt.war'
Instead, the Jenkins output console shows:
running update.sh
copying over the file
The same problem also occurs when I chain the two commands into one with &&,
and it happens with all my jobs on Jenkins.
I'm currently running Jenkins 1.469 on a Tomcat 6 server.
Any help would be appreciated, thanks!
EDIT:
I'm running these commands as batch tasks for each job. The problem doesn't seem to be Jenkins itself, as this ran correctly:
[workspace] $ /bin/sh -xe /tmp/tomcat6-tomcat6-tmp/hudson8724999678434432030.sh
+ echo 1
1
+ echo 2
2
+ echo 3
3
+ echo 4
4
The use of &> to redirect both stdout and stderr is a feature of the bash shell. If you want to use bash-specific features, you need to let Jenkins know the build step should be executed using bash.
This can be done in two ways:
1) Change the default shell in Jenkins global configuration or
2) The first line of your build step must start with #!/bin/bash ...
Note that /bin/sh is not always a symlink to /bin/bash.
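For example, a build step along these lines (a sketch using the commands from the question; paths and hostnames are the asker's) would run under bash, so the &> redirection works as intended:
#!/bin/bash
# the shebang must be the very first line of the build step so Jenkins runs it with bash instead of /bin/sh
nohup scp -i .ssh/blah -o StrictHostKeyChecking=no foo.txt tomcat@foo.coo.com:/tmp/FOO.txt &> /dev/null
nohup ssh -t -t -n -i .ssh/blah -o StrictHostKeyChecking=no tomcat@foo.coo.com '/home/tomcat/bin/update.sh /tmp/FOO.txt.war'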
Related
I want to run these two commands in a loop:
for i in $(cat input)
do
    winpty kubectl exec -it "$i" -n image -c podname -- sh
    # 2nd command
done
When I run the .sh file, the first command works fine and after that nothing happens. Can anybody help with this? I am running it through Git Bash on a Windows machine.
I have 4 shell commands I need to run and they do not depend on each other.
I have 4 slave machines. So, I want to run one of the 4 commands on each of the 4 machines, and then I want to wait until all 4 of them are finished.
How do I distribute this processing? This is what I tried:
$1 is a file containing the IP addresses of the slave machines.
for host in $(cat $1)
do
    echo $host
    # ssh into each machine and launch the command
    ssh username@$host <command>;
done
But this waits for each command to finish before moving on to the next host and launching the next command.
How do I accomplish this distributed processing, given that the commands don't depend on each other?
I would use GNU Parallel like this - running hostname in parallel on each of 4 servers:
parallel -j 4 --nonall -S 192.168.0.1,192.168.0.2,192.168.0.3,192.168.0.4 hostname
If you need to pass parameters, use --onall and put arguments after :::
parallel -j 4 --onall -S 192.168.0.1,192.168.0.2,192.168.0.3,192.168.0.4 echo ::: hello
Add --tag if you want the output lines tagged by the hostname/IP.
Add -k if you want to keep the output in order.
Add : to the server list to run on local host too.
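Putting those options together (a sketch using the same example IPs as above), tagging and ordering the output and including the local host:
parallel -j 4 --nonall --tag -k -S :,192.168.0.1,192.168.0.2,192.168.0.3,192.168.0.4 hostname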
If you aren't concerned about how many commands run concurrently, just put each one in the background with &, then wait on them as a group.
while IFS= read -r host; do
    # -n keeps ssh from swallowing the rest of the host list from the loop's stdin
    ssh -n username@"$host" <command> &
done < "$1"
wait
Note the use of a while loop instead of a for loop; see Bash FAQ 001.
The ssh part of your script needs to look like this:
$ ssh -f user@host "sh -c 'sleep 30 ; nohup ls > foo 2>&1 &'"
This one sleeps for 30 seconds and writes the output of ls to the file foo. 30 seconds is enough for you to go and check it yourself. Just build your loop around that.
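Built into the loop from the question, that might look like this (just a sketch; username, the host file in $1 and the placeholder command are the asker's, and foo is the example output file from above):
for host in $(cat $1)
do
    echo $host
    # -f backgrounds ssh after authentication; nohup keeps the remote command running after the session ends
    ssh -f username@$host "sh -c 'nohup <command> > foo 2>&1 &'"
done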
English is not my native language, please accept my apologies for any language issues.
I want to execute a script (bash/sh) through cron, which will perform various maintenance actions, including backup. This script will execute other scripts, one for each function, and I want everything that is printed to be saved in a separate file for each script executed.
The problem is that each of these other scripts executes commands like "duplicity", "certbot", "maldet", among others. The "ECHO" output from each script is written to the file, but the output of the "duplicity", "certbot" and "maldet" commands is not!
I want to avoid having to put "| tee --append" or another command on each line. But even doing this on each line, the "subscripts" do not write to the log file. Ideally, the parent script would specify which file each script writes to.
Does not work:
sudo bash /duplicityscript > /path/log
or
sudo bash /duplicityscript >> /path/log
sudo bash /duplicityscript | sudo tee --append /path/log > /dev/null
or
sudo bash /duplicityscript | sudo tee --append /path/log
Using exec (like this):
exec > >(tee -i /path/log)
sudo bash /duplicityscript
exec > >(tee -i /dev/null)
Example:
./maincron:
sudo ./duplicityscript > /myduplicity.log
sudo ./maldetscript > /mymaldet.log
sudo ./certbotscript > /mycertbot.log
./duplicityscript:
echo "Exporting Mysql/MariaDB..."
{dump command}
echo "Exporting postgres..."
{dump command}
echo "Start duplicity data backup to server 1..."
{duplicity command}
echo "Start duplicity data backup to server 2..."
{duplicity command}
In the log file, this will print:
Exporting Mysql/MariaDB...
Exporting postgres...
Start duplicity data backup to server 1...
Start duplicity data backup to server 2...
In the example above, the "ECHO" lines in each script are saved in the log file, but the output of the duplicity and dump commands is printed on the screen and not in the log file.
I searched Google and even found this topic, but I could not adapt it to my needs.
It is not a problem if the output is also printed on the screen, as long as all of it ends up in the file.
Try adding 2>&1 at the end of the line; it should help. Or run the script with sh -x to see what is causing the issue.
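Applied to the lines from the question, that would look something like this (a sketch; the script and log paths are the asker's):
# send both stdout and stderr to the log
sudo bash /duplicityscript >> /path/log 2>&1
# or keep the output on the screen as well
sudo bash /duplicityscript 2>&1 | sudo tee --append /path/log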
Hope this helps
I have a job in Jenkins which executes many Python scripts in a shell step:
#!/bin/bash -x
mkdir -p $WORKSPACE/validation/regression
rm -f $WORKSPACE/validation/regression/*.latest
cd $WORKSPACE/PythonTests/src/
# Execute test cases
python tests.py 031 > $WORKSPACE/validation/regression/TP031output_b$BUILD_NUMBER.log
python tests.py 052 > $WORKSPACE/validation/regression/TP052output_b$BUILD_NUMBER.log
python tests.py 060 > $WORKSPACE/validation/regression/TP060output_b$BUILD_NUMBER.log
My intention is that each script's output (which I can see in my terminal if I execute them manually) is stored in a log file with that classic redirection.
It used to work, but now it just creates an empty file. I can't find out what has changed since then.
Any hint?
This worked for me in Jenkins Build execute shell:
[command] 2>&1 | tee [outputfile]
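Applied to one of the test invocations from the question, that would be something like (a sketch; paths and variables are the asker's):
python tests.py 031 2>&1 | tee $WORKSPACE/validation/regression/TP031output_b$BUILD_NUMBER.log
If tests.py writes its output to stderr rather than stdout, the 2>&1 is what keeps the log file from ending up empty.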
During the provisioning of a VM I want to start a job which shall run in the background. This job shall continuously check whether certain files have been changed. In the vagrant file I reference a script which contains the following line (which does nothing but echo "x" every 3 seconds):
nohup sh -c 'while true; do sleep 3; echo x; done' &
If I run this directly on the command line, a job is created, which I can check using jobs.
If I however run it from outside the VM using
vagrant ssh -c "nohup sh -c 'while true; do sleep 3; echo x; done' &"
or if it is executed as part of the provisioning, nothing seems to happen. (There is no job and no nohup.out file was created.)
I tried the following two answers to questions which seem to address the same issue:
(1) This answer suggests to "properly daemonize" which didn't work for me. I tried the following:
vagrant ssh -c "nohup sh -c 'while true; do sleep 3; echo x; done' 0<&- &>/dev/null &"
(2) The second answer says to add "sleep 1" which didn't work either:
vagrant ssh -c "nohup sh -c 'while true; do sleep 3; echo x; done' & sleep 1"
For both attempts, directly executing the command on the command line worked just fine; however, executing it via vagrant ssh -c or during provisioning didn't seem to do anything.
This is how it works in my case.
Vagrantfile provisioning:
hub.vm.provision "shell", path: "script/run-test.sh", privileged: false, run: 'always', args: "#{selenium_version}"
I call a run-test script that runs as the vagrant user (privileged: false).
The interesting part of the script is:
nohup java -jar /test/selenium-server-standalone-$1.jar -role hub &> /home/vagrant/nohup.grid.out &
In my case I start a Java daemon and redirect the output of nohup to a specific file in my vagrant home. If I check, the job is running and owned by the vagrant user.
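The same pattern applied to the loop from the question would be something along these lines (a sketch; the nohup.loop.out filename is just an example):
nohup sh -c 'while true; do sleep 3; echo x; done' &> /home/vagrant/nohup.loop.out &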
For me, running the commands in screen worked, like:
screen -dm bash -c "my_cmd"
in the provisioning shell scripts.
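For the loop from the question, that would look something like this (a sketch):
screen -dm bash -c 'while true; do sleep 3; echo x; done'
or, from outside the VM:
vagrant ssh -c "screen -dm bash -c 'while true; do sleep 3; echo x; done'"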