Ansible: Why won't this script log when run through ansible? - bash

At the start of a script I have:
exec 3>&1 4>&2                 # save the original stdout and stderr on FDs 3 and 4
trap 'exec 2>&4 1>&3' 0 1 2 3  # restore them on EXIT, HUP, INT, or QUIT
exec 1>patch_log.out 2>&1      # send all subsequent output to the log file
(From https://serverfault.com/questions/103501/how-can-i-fully-log-all-bash-scripts-actions)
When the script is run in a terminal this produces the patch_log.out log file I expect,
but when the script is run from Ansible using the shell module it does not (yet I know the rest of the script works correctly).
I imagine it is something to do with my understanding of how exec works, and how I could get it to work through Ansible.
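One quick way to confirm which interpreter is actually running the script is to log it from inside the script itself. This is a throwaway diagnostic, not part of the original script; the log path is arbitrary and the /proc lookup is Linux-specific:

#!/bin/bash
# print the real interpreter of this process to a scratch file
echo "running under: $(readlink /proc/$$/exe)" >> /tmp/which_shell.log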

Running the script in Ansible
I needed to pass an argument to ensure the script runs under bash (not sh), thanks to @U880D, and to make sure the destination directory is set.
So in the playbook task where I run the script:
args:
  executable: /bin/bash
  chdir: /home/user/directory
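For context, a complete task might look like the sketch below; the task and script names are assumptions, not from the original question. Note that chdir also determines where the relative path patch_log.out ends up:

- name: Run patch script            # hypothetical task name
  shell: ./patch.sh                 # hypothetical script name
  args:
    executable: /bin/bash           # force bash instead of the default sh
    chdir: /home/user/directory     # patch_log.out is created relative to this directory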

Related

Launch bash subshell and run commands in a script

I want to write a shell script that does the following:
Activate pipenv virtual environment
Runs mkdocs serve which starts a local dev server for my mkdocs documentation
If I do the naïve thing and put this in my script:
cd <my-docs-directory>
pipenv shell
mkdocs serve
it fails because pipenv shell "launches a subshell in the virtual environment". I need to pass the mkdocs serve command into the virtual shell (and preferably land in that same shell after running the script).
Thanks in advance!
Answer
Philippe's answer works. Here's why.
pipenv run bash -c 'mkdocs serve ; exec bash --norc'
Pipenv allows you to run a command in the virtual environment without launching a shell:
$ pipenv run <insert command here>
bash -c <insert command here> allows you to pass a command to bash to execute
$ bash -c "echo hello"
hello
exec replaces the current shell process with the given command, so no new child process is created and the command takes over the shell's PID. Here's a related question on AskUbuntu.
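A quick way to see this in action (the PID shown is illustrative):

$ bash -c 'echo "shell pid: $$"; exec sleep 60' &
shell pid: 12345
$ ps -p 12345 -o pid,comm   # reports sleep, not bash: exec replaced the shell image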
You can use this command:
pipenv run bash -c 'mkdocs serve ; exec bash --norc'
The trailing exec bash --norc is what leaves you in an interactive shell inside the virtual environment once mkdocs serve exits (presumably --norc is there so ~/.bashrc doesn't override the virtualenv's environment).

Azure Pipelines Shell Script task not executing

I'm trying to execute a shell script that modifies some files in my source code as part of a build pipeline. The build runs on a private linux agent.
So I'm using the shell script task (I've also tried an inline bash task), my yaml looks like this:
- task: ShellScript#2
  inputs:
    scriptPath: analytics/set-base-image.bash
    args: $(analyticsBaseImage)
    failOnStandardError: true
And set-base-image.bash:
#!/bin/bash
sudo mkdir testDir
sudo sed -i -e "s/##branchBaseImagePlaceholder##/$1/g" Dockerfile-shiny
sudo sed -i -e "s/##branchBaseImagePlaceholder##/$1/g" Dockerfile-plumber
But nothing happens. I get debug output that looks like this:
##[debug]/bin/bash arg: /datadrive/agent1/_work/1/s/analytics/set-base-image.bash
##[debug]args=analytics-base
##[debug]/bin/bash arg: analytics-base
##[debug]failOnStandardError=true
##[debug]exec tool: /bin/bash
##[debug]Arguments:
##[debug] /datadrive/agent1/_work/1/s/analytics/set-base-image.bash
##[debug] analytics-base
[command]/bin/bash /datadrive/agent1/_work/1/s/analytics/set-base-image.bash analytics-base
/datadrive/agent1/_work/1/s/analytics
##[debug]rc:0
##[debug]success:true
##[debug]task result: Succeeded
##[debug]Processed: ##vso[task.complete result=Succeeded;]Bash exited with return code: 0
testDir isn't created and the files aren't modified.
The script runs fine if I log onto the agent machine and run it there (after running chmod +x on the script file).
I've also tried an inline Bash task instead of the shell script task (the difference between the two isn't obvious anyway).
If I add commands to the script that don't require any privileges, like echo and pwd, these run fine and I see the results in the debug output. But mkdir and sed don't do anything.
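When a pipeline script reports rc:0 yet has no visible effect, making it fail loudly usually surfaces the cause. Below is a diagnostic variant of the script; the added lines are suggestions, not from the original post. Note that with failOnStandardError: true the -x trace (which goes to stderr) will itself fail the task, so either drop -x or set that input to false while debugging:

#!/bin/bash
set -euxo pipefail   # exit on the first error and echo each command as it runs
whoami               # confirm which user the agent executes the script as
pwd                  # confirm the directory the relative Dockerfile paths resolve against
sudo -n true         # fail immediately if sudo would need a password on this agent
sudo mkdir testDir
sudo sed -i -e "s/##branchBaseImagePlaceholder##/$1/g" Dockerfile-shiny
sudo sed -i -e "s/##branchBaseImagePlaceholder##/$1/g" Dockerfile-plumber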

How to switch users with script to run another script

I have a script that will be executed as root, part way through the script I would like to switch to a user (say, bob) and execute another script using that user's environment. At the end of the script I want to switch back to root and execute more commands. I would like to run this script without having to enter the password for bob.
This script will be provided to my AWS EC2 instance via the user-data feature at first time bootup.
I thought the way to do this was to use either sudo or su. However, I don't appear to have access to bob's environment with either of these methods.
In the stdout echo below, you'll see that the environment variable myvar is initialized to Inara but when this script is executed with sudo, that value is unset....
dave@bugbear:~/workspaces/sandbox$ su --login bob
Password:
bob@bugbear:~$ cat bin/echo.sh
#!/bin/bash
echo "In echo.sh.. myvar is {$myvar}"
echo "Now executing the ruby script"
. ~/.bashrc
~/bin/echo.rb
bob@bugbear:~$ cat bin/echo.rb
#!/usr/bin/env ruby
puts "$myvar is: #{ENV['myvar']}"
bob@bugbear:~$ bin/echo.sh
In echo.sh.. myvar is {Inara}
Now executing the ruby script
$myvar is: Inara
bob@bugbear:~$ exit
logout
dave@bugbear:~/workspaces/sandbox$ cat test.sh
#!/bin/bash
stty echo
sudo --login -u bob bin/echo.sh
dave@bugbear:~/workspaces/sandbox$ ./test.sh
In echo.sh.. myvar is {}
Now executing the ruby script
$myvar is:
You are probably looking for one of these:
Simulate an initial login environment with -i for user bob (-u bob):
sudo -i -u bob [command]
Or, use sudo to gain the privilege required to run su, and ask su to start a login shell as bob with - (without the - you're not getting a login shell) and to run a command with -c:
sudo su - bob -c [command]
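Putting that together, a sketch of the root-level script; the paths and the surrounding root commands are assumptions. Since EC2 user-data scripts run as root, no password prompt is involved:

#!/bin/bash
# ... commands running as root ...

# run bob's script with bob's login environment (his .bash_profile/.profile are sourced)
sudo -i -u bob /home/bob/bin/echo.sh
# or equivalently:
# su - bob -c '/home/bob/bin/echo.sh'

# ... continue as root with the remaining commands ...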

ansible throwing error on script execution

I am trying to run the script (foo) which is present under my home directory (/home/ubuntu) using Ansible.
If I manually move to /home/ubuntu and run the script as below
./foo --arg1=aaa --arg2=xxx --arg3=yyy
the script works fine on the command line.
However, when I try to run the same script using Ansible as below
- name: Running Config Script
  command: chdir=/home/ubuntu ./foo --arg1=aaa --arg2=xxx --arg3=yyy
the script fails. I also tried using the script module instead of command; it's not working either.
I haven't tested it, but please try using shell instead of command:
- name: Running Config Script
  shell: ./foo --arg1=aaa --arg2=xxx --arg3=yyy
  args:
    chdir: /home/ubuntu
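Unlike command, shell runs the line through a shell on the remote host, so expansions and environment handling match an interactive session. If it still fails, registering the result exposes the script's rc, stdout, and stderr; foo_result is an arbitrary variable name:

- name: Running Config Script
  shell: ./foo --arg1=aaa --arg2=xxx --arg3=yyy
  args:
    chdir: /home/ubuntu
  register: foo_result      # capture the result for inspection

- name: Show script output
  debug:
    var: foo_result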

Executing a script running ssh commands in the background

I'm trying to execute this script on a remote server with requiretty enabled in the sudoers file.
#!/bin/bash
# -tt forces pseudo-tty allocation, needed because requiretty is set in sudoers
value=$(ssh -tt localhost sudo bash -c hostname)
echo "$value"
If I run the script using $ ./sample.sh & it immediately stops in the background; only by using fg can I force the script to run. I think the problem is the missing tty for the output, but what can I do?
... what can I do?
You can stty -tostop. When the terminal's tostop flag is set, background jobs that try to write to the terminal are stopped (with SIGTTOU); clearing the flag lets the script write its output and keep running in the background.
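A minimal sketch of the workflow; the job number and PID are illustrative:

$ stty -tostop     # allow background jobs to write to this terminal without being stopped
$ ./sample.sh &
[1] 4242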
