Azure Pipelines Shell Script task not executing - bash

I'm trying to execute a shell script that modifies some files in my source code as part of a build pipeline. The build runs on a private Linux agent.
So I'm using the shell script task (I've also tried an inline Bash task); my YAML looks like this:
- task: ShellScript@2
  inputs:
    scriptPath: analytics/set-base-image.bash
    args: $(analyticsBaseImage)
    failOnStandardError: true
And set-base-image.bash:
#!/bin/bash
sudo mkdir testDir
sudo sed -i -e "s/##branchBaseImagePlaceholder##/$1/g" Dockerfile-shiny
sudo sed -i -e "s/##branchBaseImagePlaceholder##/$1/g" Dockerfile-plumber
But nothing happens. I get debug output that looks like this:
##[debug]/bin/bash arg: /datadrive/agent1/_work/1/s/analytics/set-base-image.bash
##[debug]args=analytics-base
##[debug]/bin/bash arg: analytics-base
##[debug]failOnStandardError=true
##[debug]exec tool: /bin/bash
##[debug]Arguments:
##[debug] /datadrive/agent1/_work/1/s/analytics/set-base-image.bash
##[debug] analytics-base
[command]/bin/bash /datadrive/agent1/_work/1/s/analytics/set-base-image.bash analytics-base
/datadrive/agent1/_work/1/s/analytics
##[debug]rc:0
##[debug]success:true
##[debug]task result: Succeeded
##[debug]Processed: ##vso[task.complete result=Succeeded;]Bash exited with return code: 0
testDir isn't created and the files aren't modified.
The script runs fine if I log onto the agent machine and run it there (after running chmod +x on the script file).
I've also tried an inline Bash task instead of the shell script task (the difference between the two isn't obvious anyway).
If I add commands to the script that don't require any privileges, like echo and pwd, these run fine and I see the results in the debug output. But mkdir and sed don't.

Related

Ansible: Why won't this script log when run through ansible?

At the start of a script I have:
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>patch_log.out 2>&1
(From https://serverfault.com/questions/103501/how-can-i-fully-log-all-bash-scripts-actions)
When the script is run in a terminal, this produces the patch_log.out log file I expect, but when the script is run from Ansible using the shell module it does not (yet I know the rest of the script works correctly).
I imagine it is something to do with my understanding of how exec works, and how I could get it to work through Ansible.
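For reference, the redirection trick can be exercised as a self-contained sketch. Note that patch_log.out is a relative path, so the file is created in whatever the current working directory happens to be when the script runs, which is exactly why setting the destination directory matters under Ansible:

```shell
#!/bin/bash
# Sketch of the save-and-restore redirection pattern from the question.
exec 3>&1 4>&2                  # keep copies of the original stdout/stderr
trap 'exec 2>&4 1>&3' 0 1 2 3   # restore them on exit or on HUP/INT/QUIT
exec 1>patch_log.out 2>&1       # everything below goes to the log file

echo "this line ends up in patch_log.out, not on the terminal"
```

Run from a terminal, nothing is printed; the output lands in patch_log.out next to wherever you launched it.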
Running the script in Ansible
Needed to pass an argument to ensure bash (not sh) is used (thanks to @U880D)
Make sure to set the destination directory
So in the playbook where I run the script:
args:
  executable: /bin/bash
  chdir: /home/user/directory

Execute symfony command in bash script

I can't get a Symfony command to execute in a bash script when I run it from cron.
When I execute the .sh script by hand everything is working fine.
In my bash file the command is executed like this:
/usr/bin/php -q /var/www/pww24/bin/console pww24:import asari $office > /dev/null
I run the scripts as root, and the cron job is set to root as well. For the test I set the file permissions to 777 and added +x for execution.
The bash script executes fine. It acts like it's skipping the command, but from the logs I can see that the code is executed.
It turned out that the Symfony system variables I have stored on the server are not enough. When you execute the command from the command line it's fine, but when using cron you need them in a .env file. It turned out that in the process of continuous integration I only got a .env.dist file, and I had to create the .env file anyway.
Additionally, I've added two lines to cron:
PATH=~/bin:/usr/bin/:/bin
SHELL=/bin/bash
and run my command like this from the bash file:
sudo /usr/bin/php -q /var/www/pww24/bin/console pww24:import asari $office > /dev/null
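Since cron starts with an almost empty environment, another option is to have the cron-invoked script load the .env itself before calling the console command. A hedged sketch (the paths and the idea of sourcing the .env directly are assumptions, not something from the answer above):

```shell
#!/bin/bash
# Hypothetical cron wrapper: export the variables from the project's .env
# before running the Symfony console command, so the command sees the same
# environment it gets in an interactive shell.
set -a                         # auto-export every variable assigned below
source /var/www/pww24/.env     # path assumed from the question
set +a
/usr/bin/php -q /var/www/pww24/bin/console pww24:import asari "$office" > /dev/null
```

This only works if the .env file contains plain KEY=value lines (no shell-incompatible syntax).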

Running a bash script from alpine based docker

I have Dockerfile containing:
FROM alpine
COPY script.sh /script.sh
CMD ["./script.sh"]
and a script.sh (with executable permission):
#!/bin/bash
echo "hello world from script file"
when I run
docker run --name testing fff0e5c81ca0
where fff0e5c81ca0 is the id after building, I get an error
standard_init_linux.go:195: exec user process caused "no such file or directory"
So how can I solve it?
To run a bash script in an alpine based image, you need to do one of the following:
Install bash:
RUN apk add --update bash
Or use #!/bin/sh in the script instead of #!/bin/bash
Either of these two (or both) will work.
Or, as @Maroun's answer in the comments suggests, you can change your CMD to execute your bash script with sh:
CMD ["sh", "./script.sh"]
Your Dockerfile may look like this:
FROM openjdk:8u171-jre-alpine3.8
COPY script.sh /script.sh
CMD ["sh", "./script.sh"]

Docker exec - Write text to file in container

I want to write a line of text to a textfile INSIDE a running docker container. Here's what I've tried so far:
docker exec -d app_$i eval echo "server.url=$server_url" >> /home/app/.app/app.config
Response:
/home/user/.app/app.config: No such file or directory
Second try:
cfg_add="echo 'server.url=$server_url' >> /home/user/.app/app.config"
docker exec -i app_$i eval $cfg_add
Response:
exec: "eval": executable file not found in $PATH
Any ideas?
eval is a shell builtin, whereas docker exec requires an external utility to be called, so using eval is not an option.
Instead, invoke a shell executable in the container (bash) explicitly, and pass it the command to execute as a string, via its -c option:
docker exec "app_$i" bash -c "echo 'server.url=$server_url' >> /home/app/.app/app.config"
By using a double-quoted string to pass to bash -c, you ensure that the current shell performs string interpolation first, whereas the container's bash instance then sees the expanded result as a literal, as part of the embedded single-quoted string.
As for your symptoms:
/home/user/.app/app.config: No such file or directory was reported because the redirection you intended to happen in the container actually happened in your host's shell - and because directory /home/user/.app apparently doesn't exist in your host's filesystem, the command failed before your host's shell even attempted to execute the command (bash will abort command execution if an output redirection cannot be performed).
Thus, even though your first command also contained eval, its use didn't surface as a problem until your second command, which actually did get executed.
exec: "eval": executable file not found in $PATH happened, because, as stated, eval is not an external utility, but a shell builtin, and docker exec can only execute external utilities.
Additionally:
If you need to write text from outside the container, this also works:
(docker exec -i container sh -c "cat > c.sql") < c.sql
This will pipe your input into the container. Of course, this would also work for plain text (no file). It is important to leave off the -t parameter.
See https://github.com/docker/docker/pull/9537
UPDATE (in case you just need to copy files, not parts of files):
Docker v17.03 has docker cp which copies between the local fs and the container: https://docs.docker.com/engine/reference/commandline/cp/#usage
Try using a heredoc:
(docker exec -i container sh -c "cat > /test/iplist") << EOF
10.99.154.146
10.99.189.247
10.99.189.250
EOF

Bash script: Turn on errors?

After designing a simple shell/bash based backup script on my Ubuntu machine and making it work, I've uploaded it to my Debian server, where it outputs a number of errors while executing.
What can I do to turn on "error handling" on my Ubuntu machine to make debugging easier?
ssh into the server
run the script by hand with either -v or -x or both
try to duplicate the user, group, and environment of the failing run in your terminal window; if necessary, run the program with something like su -c 'sh -v script' otheruser
You might also want to pipe the result of the bad command, particularly if run by cron(8), into /bin/logger, perhaps something like:
sh -v -x badscript 2>&1 | /bin/logger -t badscript
and then go look at /var/log/messages.
Bash lets you turn on debugging selectively, or completely, with the set command.
The command set -x will turn on debugging anywhere in your script. Likewise, set +x will turn it off again. This is useful if you only want to see debug output from parts of your script.
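A minimal sketch of that selective tracing:

```shell
#!/bin/bash
# Only the commands between set -x and set +x are traced: each one is
# echoed to stderr with a leading '+' before it runs.
echo "quiet"
set -x
date > /dev/null    # appears on stderr as '+ date'
set +x              # the 'set +x' itself is the last traced line
echo "quiet again"
```

The trace goes to stderr, so normal output and debug output can still be redirected separately.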
Change your shebang line to include the trace option:
#!/bin/bash -x
You can also have Bash scan the file for errors without running it:
$ bash -n scriptname
