I am trying to get aliases set up so that they print out the command, then run it.
Ex:
> alias ls='ls -alh'
> ls
Running "ls -alh"
total 1.8G
drwxr-x--- 36 root root 4.0K Apr 23 09:44 ./
drwxr-xr-x 28 root root 4.0K Mar 6 17:24 ../
Is this possible? I was thinking of using a wrapper function, but I am unsure how one would accomplish this.
Thanks!
Just add an echo command in your alias before the actual command, either with ; or with &&:
alias ls='echo "Running ls -alh"; ls -alh'
alias ls='echo "Running ls -alh" && ls -alh'
This runs two commands one after the other. The first command is echo "Running ls -alh"; the && checks the return value of the echo command, and if it is 0, the command ls -alh is run. However, if for some reason there is a problem with the echo command and its return value is not 0, then the ls command won't be run.
The && operator can come in very handy when writing scripts, to run one command only when another is successful.
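If you would rather use the wrapper-function idea from the question instead of the alias, a minimal sketch (command bypasses the function, so the real ls still runs):
ls() {
    echo 'Running "ls -alh"'
    command ls -alh "$@"
}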
I have a scenario where I need to automate the manual build update process via a shell script on multiple VM nodes.
To do this, I am trying the sample script below to first ssh into the instance and then switch to the root user to perform the further steps, like copying the build to the archives directory under /var, and then proceed with the later steps.
Below is the sample script:
#!/bin/sh
publicKey='/path/to/publickey'
buildVersion='deb9.deb build'
buildPathToStore='/var/cache/apt/archives/'
pathToHomedir='/home'
script="whoami && pwd && ls -la && whoami && mv ${buildVersion} ${buildPathToStore} && find ${buildPathToStore} | grep deb9"
for var in "$@"
do
copyBuildPath="${publicKey} ${buildVersion} ${var}:/home/admin/"
echo "copy build ==>" ${copyBuildPath}
scp -r -i ${copyBuildPath}
ssh -i $publicKey -t $var "sudo su - & ${script}; " # This shall execute all commands as root
done
So the CLI output for the above script is something like this:
admin //this is the user check
/home/admin
total 48
drwxr-xr-x 6 admin admin 4096 Dec 6 00:28 .
drwxr-xr-x 6 root root 4096 Nov 17 14:07 ..
drwxr-xr-x 3 admin admin 4096 Nov 17 14:00 .ansible
drwx------ 2 admin admin 4096 Nov 23 18:26 .appdata
-rw------- 1 admin admin 5002 Dec 6 17:47 .bash_history
-rw-r--r-- 1 admin admin 220 May 16 2017 .bash_logout
-rw-r--r-- 1 admin admin 3506 Jun 14 2019 .bashrc
-rw-r--r-- 1 admin admin 675 May 16 2017 .profile
drwx------ 4 admin admin 4096 Nov 23 18:26 .registry
drwx------ 2 admin admin 4096 Jun 21 2019 .ssh
-rw-r--r-- 1 admin admin 0 Dec 6 19:42 testFile.txt
-rw------- 1 admin admin 2236 Jun 21 2019 .viminfo
admin
If I use sudo su -c and remove &
like:
ssh -i $publicKey -t $var "sudo su -c ${script}; "
Then whoami returns the user as root the first time, but the working directory still prints as /home/admin instead of /root.
And the next set of commands still runs as the admin user rather than root, so the admin user does not have the privileges to move the build to the archives directory and install the build.
With & I wanted to ensure that the further steps are done in the background.
Not sure how to proceed with this. Good suggestions are most welcome right now :)
"sudo su - & ${script}; "
expands to:
sudo su - & whoami && pwd && ...
First sudo su - is run in the background. Then the command chain is executed, as the normal user.
sudo su -c ${script};
expands to:
sudo su -c whoami && pwd && ...
So first sudo su -c whoami is executed, which runs whoami as root. Then, if this command is successful, pwd is executed, as the normal user.
It is utterly hard to correctly pass commands to execute on a remote site using ssh. It is even harder to do it with sudo su - the command will be word-split three (or two?) times: once by ssh, then by the shell, then by the shell run by sudo su.
If you do not need interactive communication, it's best to use a here document with the shell's -s option, something along these lines (untested):
# DO NOT store commands to use in a variable.
# or if you do and you know what you are doing, properly quote it (printf "%q ") and run it via eval
script() {
set -euo pipefail
whoami
pwd
ls -la
whoami
mv "$buildVersion" "$buildPathToStore"
find "$buildPathToStore" | grep deb9
}
ssh ... "sudo bash -s" <<EOF
echo "Yay! anything here!"
echo "Note that here document delimiter is not quoted!"
$(
# safely import context to work with
# note how command substitution is executed on host side
declare -f script
# pass variables too!
declare -p buildVersion buildPathToStore
)
script
EOF
When you use su alone it keeps you in your current directory; if you use su - it simulates a root login.
You should write: su - root -c "${script}"
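A quick way to see the difference (paths are illustrative):
su root -c pwd     # stays in the caller's directory, e.g. /home/admin
su - root -c pwd   # simulates a login, so it prints /root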
My goal is to have Jenkins 2 execute alpha integration tests between an express js app and a postgres db. I am able to spin up the containerized resources locally and test successfully with bash scripts that employ docker-compose. The relevant bash script is scripts/docker/dockerRunTest.sh.
However, when I try to do the same thing via Jenkins, Jenkins claims that the initiating script is not found.
Jenkinsfile
stage('Alpha Integration Tests') {
agent {
docker {
image 'tmaier/docker-compose'
args '-u root -v /var/run/docker.sock:/var/run/docker.sock --network host'
}
}
steps {
sh 'ls -lah ./scripts/docker/'
sh './scripts/docker/dockerRunTest.sh'
}
}
Output
+ ls -lah ./scripts/docker/
total 36
drwxr-xr-x 2 root root 4.0K Jan 26 21:31 .
drwxr-xr-x 6 root root 4.0K Jan 26 20:54 ..
-rwxr-xr-x 1 root root 2.2K Jan 26 21:31 docker.lib.sh
-rwxr-xr-x 1 root root 282 Jan 26 21:31 dockerBuildApp.sh
-rwxr-xr-x 1 root root 289 Jan 26 21:31 dockerBuildTestRunner.sh
-rwxr-xr-x 1 root root 322 Jan 26 21:31 dockerDown.sh
-rw-r--r-- 1 root root 288 Jan 26 21:31 dockerRestart.sh
-rwxr-xr-x 1 root root 482 Jan 26 21:31 dockerRunTest.sh
-rwxr-xr-x 1 root root 284 Jan 26 21:31 dockerUp.sh
+ ./scripts/docker/dockerRunTest.sh
/var/jenkins_home/workspace/project-name#2#tmp/durable-9ac0d23a/script.sh: line 1: ./scripts/docker/dockerRunTest.sh: not found
ERROR: script returned exit code 127
The file clearly exists per the ls output. I have some hazy idea that there may be some conflict between how shell scripts and bash scripts work, but I cannot quite grasp the nuance of why Jenkins is not able to execute a script that clearly exists.
edit (including script contents):
dockerRunTest.sh
#!/bin/bash
MY_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd -P )"
MY_DIR="${MY_DIR:?}"
SCRIPTS_DIR="$(realpath "${MY_DIR}/..")"
ROOT_DIR="$(realpath "${SCRIPTS_DIR}/..")"
TEST_DIR="${ROOT_DIR}/test/integration"
SRC_DIR="${ROOT_DIR}/src"
REPORTS_DIR="${ROOT_DIR}/reports"
. "${SCRIPTS_DIR}/docker/docker.lib.sh"
dockerComposeUp
dockerExecuteTestRunner
dockerComposeDown
docker.lib.sh
#!/bin/bash
CURRENT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd -P )"
CURRENT_DIR="${CURRENT_DIR:?}"
SCRIPTS_DIR="$(realpath "${CURRENT_DIR}/..")"
ROOT_DIR="$(realpath "${SCRIPTS_DIR}/..")"
. "${SCRIPTS_DIR}/lib.sh"
dockerComposeUp() {
docker-compose build --no-cache
docker-compose up --detach --force-recreate
DC_CODE=$?
if [ ${DC_CODE} -ne 0 ]; then
# Introspection
docker-compose logs
docker-compose ps
exit ${DC_CODE}
fi
}
dockerComposeDown() {
# docker-compose rm: Removes stopped service containers.
# -f, --force - Don't ask to confirm removal.
# -s, --stop - Stop the containers, if required, before removing.
# -v - Remove any anonymous volumes attached to containers.
docker-compose rm --force --stop -v
}
dockerComposeRestart() {
dockerComposeDown
dockerComposeUp
}
dockerBuildTestRunner() {
docker build -f test/Dockerfile -t kwhitejr/botw-test-runner .
}
dockerExecuteTestRunner() {
IMAGE_NAME="kwhitejr/botw-test-runner"
echo "Build new ${IMAGE_NAME} image..."
dockerBuildTestRunner
echo "Run ${IMAGE_NAME} executable test container..."
docker run -it --rm --network container:api_of_the_wild_app_1 kwhitejr/botw-test-runner
}
The tmaier/docker-compose image doesn't have the /bin/bash interpreter installed by default, since the latest tag is an Alpine image [1, 2]. This can be confirmed by running:
$ docker run -it --rm tmaier/docker-compose bash
/usr/local/bin/docker-entrypoint.sh: exec: line 35: bash: not found
To get the script working, either install bash in the Docker image using apk add bash, or change the shebang to #!/bin/sh if the script can be run with the ash shell (the default shell in BusyBox).
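If you go the install route, one option is to add bash right before invoking the script, e.g. as a pipeline step like the sketch below (this assumes the agent container is allowed to install packages; the -u root arg suggests it is):
sh '''
    apk add --no-cache bash   # bash is missing from the Alpine base image
    ./scripts/docker/dockerRunTest.sh
'''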
[1] https://github.com/tmaier/docker-compose/blob/b740feb61fb25030101638800a609605cfd5e96a/Dockerfile#L2
[2] https://github.com/docker-library/docker/blob/d94b9832f55143f49e47d00de63589ed41f288e7/18.09/Dockerfile#L1
I had a similar issue, but in my case it was because the shell script file had Windows-format line endings (if you open the file in the terminal using vi, you will see each line ends with ^M).
I fixed this using Notepad++: Edit -> EOL Conversion -> Unix (LF).
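If you prefer to fix the line endings from the command line, stripping the carriage returns with sed (or dos2unix, if it is installed) works as well; a quick sketch against the script from this question:
sed -i 's/\r$//' scripts/docker/dockerRunTest.sh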
So I have folder aa
$ mkdir aa
and path expansion for ls command works like this:
$ ls -la a*
total 0
drwxr-xr-x 1 a a 0 Mar 29 08:41 ./
drwxr-xr-x 1 a a 0 Dec 31 1979 ../
$ ls -la a?
total 0
drwxr-xr-x 1 a a 0 Mar 29 08:41 ./
drwxr-xr-x 1 a a 0 Dec 31 1979 ../
But "the same" for mkdir shows an error:
$ mkdir a*/bb
mkdir: cannot create directory 'a*/bb': No such file or directory
$ mkdir a?/bb
mkdir: cannot create directory 'a?/bb': No such file or directory
Where can I read about why this difference in behavior happens, and is there a simple trick to make mkdir "smarter", so it behaves like ls?
This does not work because wildcard expansion is done by the shell before the argument is passed to mkdir. Since bb does not exist yet, bash finds no match for a*/bb and, with the default shell options, passes the pattern to mkdir unchanged; mkdir then fails because no directory literally named a* exists. Globs can only match path names that already exist, so they cannot name something you are about to create. You can also try e.g.
echo a*/bb
or, as you did before,
ls -la a*/bb
The echo prints the unexpanded pattern, and the ls fails with a similar "No such file or directory" message.
Now I realize how stupid that question was. Probably I wanted something like this for expansion to work:
mkdir "$(ls -d a?)"/bb
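Another option along the same lines is to let the glob expand into a bash array first and hand the result to mkdir (a small sketch, assuming the pattern matches exactly one existing directory):
dirs=(a?)                 # the glob expands here, because a matching directory already exists
mkdir "${dirs[0]}/bb"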
Try:
mkdir -p a*/aa
mkdir -p a?/aa
I am trying to chown the home directory of user test from a bash script. I need this functionality because of syncthing, which is not syncing the ownerships.
#!/bin/bash
user=test
"chown $user:$user /home/$user"
When I use the above script, I get a message "test.sh: line 5: chown test:test ~/home/test/: No such file or directory".
Output of
ls -l /home/ |grep test
drwx------ 5 pwresettest 1005 121 2. Nov 04:23 pwresettest
drwx------ 14 test 1001 4096 29. Okt 05:41 test
When I use the command on the command line, it works without problems.
Did I do something wrong?
The shell treats the quoted string as a single word and uses it as the name of the command, rather than as a command name followed by arguments. Simply take off the quotes you've added in your script:
#!/bin/bash
user=test
chown $user:$user /home/$user
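To see why the quotes break it, a small illustration (not from the original script):
cmd='chown test:test /home/test'
"$cmd"   # fails: the shell looks for a single command literally named "chown test:test /home/test"
$cmd     # word-splits and runs, but simply writing the command unquoted is clearer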
When you use chown on the command line you aren't quoting the entire command. Don't do that in the script either. – Etan Reisner
I have two shell scripts.
The working one:
$ cat script_nas.sh
#!/bin/bash
for i in `cat nas_filers`
do echo $i
touch /mnt/config-backup/nas_backup/$i.auditlog.0.$(date '+%Y%m%d')
ssh -o ConnectTimeout=5 root@$i rdfile /etc/configs/config_saved > /mnt/config-backup/nas_backup/$i.auditlog.0.$(date '+%Y%m%d')
done
The other (not working) one:
$ cat script_san.sh
#!/bin/bash
for i in `cat san_filers`
do echo $i
touch /mnt/config-backup/san_backup/$i.auditlog.0.$(date '+%Y%m%d')
ssh -o ConnectTimeout=5 root@$i rdfile /etc/configs/config_saved > /mnt/config-backup/san_backup/$i.auditlog.0.$(date '+%Y%m%d')
done
Cron entries are:
$ crontab -l
# Filers config save script
0 0 * * * /mnt/config-backup/script_san.sh
0 0 * * * /mnt/config-backup/script_nas.sh
0 0 * * * /mnt/config-backup/delete_file
Script script_san.sh is not working.
Outputs are like
SAN backup directory
san_backup]# ls -lart alln01-na-exch01a.cisco.com.auditlog*
-rw-r--r-- 1 root root 210083 Mar 1 22:24 alln01-na-exch01a.auditlog.0.20150301
[root@XXXXX san_backup]# pwd
/mnt/config-backup/san_backup
NAS backup directory
nas_backup]# ls -lart rcdn9-25f-filer43b.cisco.com.auditlog*
-rw-r--r-- 1 root root 278730 Feb 26 00:06 rcdn9-25f-filer43b.cisco.com.auditlog.0.20150226
-rw-r--r-- 1 root root 281612 Feb 27 00:17 rcdn9-25f-filer43b.cisco.com.auditlog.0.20150227
-rw-r--r-- 1 root root 284105 Feb 28 00:02 rcdn9-25f-filer43b.cisco.com.auditlog.0.20150228
-rw-r--r-- 1 root root 284101 Mar 1 00:02 rcdn9-25f-filer43b.cisco.com.auditlog.0.20150301
[root@XXXXXXX nas_backup]#
From the cron logs I can see that cron is executing both scripts, but the output for script_san.sh is not coming.
In my experience, most of the time a script that works manually but not from crontab fails because the login scripts were not run. Try adding something like source ~/.bash_profile at the beginning of the script, or as the first line in the cron file. Did you try (for debugging) running the script with the at command?
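A minimal sketch of both suggestions; the exact profile file and PATH are assumptions, so adjust them to whatever your interactive shell actually provides.
At the top of script_san.sh, after the shebang:
. ~/.bash_profile   # load the login environment that cron does not provide
Or set the essentials directly in the crontab instead:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
0 0 * * * /mnt/config-backup/script_san.sh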