Using Ansible, what is the preferable way to check the presence of a particular command in a user's PATH?

Let's say you want to determine whether the user 'abc' has the command 'command_abc' in their PATH. What's the best way to do that kind of check?
Is there anything better than just using the shell module and executing something like
sudo su - abc -c 'which command_abc' && echo 'ok'

[edit]
The only direct way I'm aware of is yours, but I would use a become_user statement and the command module.
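A minimal sketch of what that could look like, assuming the user is named abc and the command is command_abc (the task names and register variable are made up):

- name: Check whether command_abc is in abc's PATH
  command: which command_abc
  become: true
  become_user: abc
  register: which_abc
  changed_when: false
  failed_when: false

- debug:
    msg: "command_abc is available"
  when: which_abc.rc == 0

Note that the PATH seen by a become_user task is not necessarily the user's full login PATH, so this is an approximation of what the user sees interactively.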
Hope that helps.
[old answer, but not the point of your question]
I would check whether the path to the binary is set for the user in .bashrc or .profile with lineinfile. After that, a check for the right permissions on the binary should be enough, probably with the stat or file module. That way you have tested it too, but without the need for shell scripts.
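A rough sketch of the stat part, with a placeholder path for the binary:

- stat:
    path: /usr/local/bin/command_abc
  register: cmd_stat

- debug:
    msg: "binary exists and is executable"
  when: cmd_stat.stat.exists and cmd_stat.stat.executable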

Related

pexpect kind of operation with ansible

I am looking to automate an interactive install process with Ansible. This install does not have a silent install option and does not take command-line arguments for the interactive questions. The questions involve setting a folder location, making sure the folder location is right, etc., for which the answers might be default or custom.
I looked into the expect module of Ansible, but it seems like it does not solve my purpose.
- expect:
    command: passwd username
    responses:
      (?i)password: "MySekretPa$$word"
I don't need the command, but it's required. Instead, I am looking for something that could match Are you sure you want to continue [y|n]? [n]: with a regex, for which I want to accept the default by sending Return (or typing n as a response), and, for example, Backup directory [/tmp], for which the response would also be a carriage return.
I don't need the command but it's required. Instead I am looking for something that could regex Are you sure you want to continue [y|n]? [n]:
The module requires a command because you have to run something to get any output.
You obviously do have a command in mind, because you've run it manually and seen the output it produces. That's what you should be plugging into the module.
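Applied to the prompts from the question, a sketch could look like this (the installer path is a placeholder; the empty response strings just send Return to accept the defaults):

- expect:
    command: /path/to/installer.sh
    responses:
      'Are you sure you want to continue \[y\|n\]\? \[n\]:': ''
      'Backup directory \[/tmp\]:': ''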
Alternatively, you can write a pexpect script yourself and use the command or shell modules to run it.
I've figured out a way that works for me. I piped the answers into the shell script, which asks for them when run manually, like ./shell.sh <<< 'answer1\nanswer2\n'. That works perfectly for me, and I have added it to the task.
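One caveat: inside single quotes, \n is not expanded to a newline, so the script receives the backslashes literally. If real newlines are needed, a sketch with the same effect:

printf 'answer1\nanswer2\n' | ./shell.sh
# or, using bash's ANSI-C quoting:
./shell.sh <<< $'answer1\nanswer2'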

Using variables between files in shell / bash scripting

This question has been posted here many times, but it never seems to answer my question.
I have two scripts. The first one contains one or multiple variables, the second script needs those variables. The second script also needs to be able to change the variables in the first script.
I'm not interested in sourcing (where the first script containing the variables runs the second script) or exporting (using environment variables). I just simply want to make sure that the second script can read and change (get and set) the variables available in the first script.
(PS. If I misunderstood how sourcing or exporting works, and it applies to my scenario, please let me know. I'm not completely closed to those methods, after what I've read, I just don't think those things will do what I want)
Environment variables are per process. One process cannot modify the variables in another. What you're asking for is not possible.
The usual workaround for scripts is sourcing, which works by running both scripts in the same shell process, but you say you don't want to do that.
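For reference, a minimal sketch of that sourcing approach (the file names are made up):

# vars.sh -- holds the variables
varnum1=5

# script2.sh
. ./vars.sh        # runs vars.sh in this shell, so its variables are visible here
echo "$varnum1"    # prints 5
varnum1=6          # changes only this shell's copy, not vars.sh on disk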
I've also given this some thought. I would use files to hold the variables. For example, in script 1 you write the variable values to files:
echo "$varnum1" > /home/username/scriptdir/vars/varnum1
echo "$varnum2" > /home/username/scriptdir/vars/varnum2
And in script 2 you read the values from the files back into variables:
varnum1=$(cat /home/username/scriptdir/vars/varnum1)
varnum2=$(cat /home/username/scriptdir/vars/varnum2)
Both scripts can read or write the variables at any given time. In theory, two scripts could try to access the same file at the same time; I'm not sure what exactly would happen, but since each file only contains one value, the time to read or write should be extremely short.
To reduce those times even further, you can use a ramdisk.
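If the simultaneous access worries you, a hedged sketch that serializes readers and writers with flock(1), using a lock file next to the variable files:

# writer: hold an exclusive lock while updating the file
( flock -x 9; echo "$varnum1" > /home/username/scriptdir/vars/varnum1 ) 9>/home/username/scriptdir/vars/.lock

# reader: hold a shared lock while reading the value back
varnum1=$(flock -s /home/username/scriptdir/vars/.lock cat /home/username/scriptdir/vars/varnum1)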
I think this is much better than scripts editing each other (yuk!). Live editing of scripts can mess them up, and an edit only takes effect when you start the script again after it was made.
Good luck!
So after a long search on the web and a lot of trying, I finally found some kind of a solution. Actually, it's quite simple.
There are some prerequisites though.
The variable you want to set already has to exist in the file you're trying to set it in (I'm guessing the variable could be created as well when it doesn't exist yet, but that's not what I'm going for here).
The file you're trying to set the variable in has to exist (obviously; I'm guessing again this could be handled as well, but again, not what I'm going for).
Write
sudo sed -i 's/^\(VARNAME=\).*/\1VALUE/' FILENAME
So, for example, setting the variable called Var1 to the value 5 in the file test.ini:
sudo sed -i 's/^\(Var1=\).*/\15/' test.ini
Read
sudo grep -Po '(?<=VARNAME=).*' FILENAME
So, for example, reading the variable called Var1 from the file test.ini:
sudo grep -Po '(?<=Var1=).*' test.ini
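For what it's worth, here is a commented sketch of the same two commands wrapped in made-up helper functions (sudo omitted; add it back if the file is not writable by you):

# set_var NAME VALUE FILE: \(...\) captures "NAME=", and \1 puts it back,
# so everything after the = on that line is replaced by VALUE
set_var() {
    sed -i "s/^\($1=\).*/\1$2/" "$3"
}

# get_var NAME FILE: -P enables Perl regexes, -o prints only the match,
# and (?<=NAME=) is a lookbehind, so only the part after the = is printed
get_var() {
    grep -Po "(?<=$1=).*" "$2"
}

set_var Var1 5 test.ini
get_var Var1 test.ini    # prints: 5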
Just to be sure
I've noticed some issues when running the script that sets the variables from a different folder than the one where the script is located.
To make sure this always goes right, you can do one of two things:
sudo sed -i 's/^\(VARNAME=\).*/\1VALUE/' `dirname $0`/FILENAME
So basically, just put `dirname $0`/ (including the backticks) in front of the filename.
The other option is to make `dirname $0`/ a variable (again including the backticks), which would look like this:
my_dir=`dirname $0`
sudo sed -i 's/^\(VARNAME=\).*/\1VALUE/' $my_dir/FILENAME
So basically, if you've got a file named test.ini which contains the line Var1= (in my tests, the variable can start out empty and you will still be able to set it; mileage may vary), you will be able to set and get the value of Var1.
I can confirm that this works (for me), but since you all, with way more experience in scripting than me, didn't come up with this, I'm guessing this is not a great way to do it.
Also, I couldn't tell you the first thing about what's happening in those commands above; I only know that they work.
So if I'm doing something stupid, or if you can explain to me what's happening in the commands above, please let me know. I'm very curious to find out what you guys think of this solution.

Find out where an environment variable was last set in bash

Okay, I know there is a bash debugger. But what I'm seeking is this: if an environment variable is set in one of my startup scripts and I don't know how or where it was set, is there a way to find it other than exhaustively searching the scripts?
I mean, is there a mechanism or tool that provides such a thing? Does bash keep track of where variables are set?
This might not seem very important, but it crossed my mind the other day when I was helping a friend install OpenCL, where the package supposedly set the variable $ATISTREAMSDKROOT automatically. The package was supposed to add a file to /etc/profile.d to allow for setting the variable, but it didn't, and luckily the variable came out blank.
But I was wondering: if it hadn't come out blank, and the package had added it to some random file, I would probably have had no way of telling where it was, other than looking for it.
Of course I know one could write a sed command or two and search through the scripts, but I'd consider that an exhaustive search :D
One option would be to start an instance of bash with:
bash -x
... and look for where the variable is set in that output. To redirect that output to a file, you could do:
bash -x -ls -c "exit" 2> shell-startup-output
You should see in the output where each file is sourced.
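You can then search the captured trace for the variable in question, e.g. the one from the question:

grep ATISTREAMSDKROOT shell-startup-output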

how to invoke ruby script containing system command with cron job?

I have a Ruby script containing a system command, like http://gist.github.com/235833. When I run this script from the shell, it works correctly, but when I added it to my cron job list, it doesn't work any more. The cron job looks like:
10/* * * * * cd /home/hekin; /usr/bin/ruby my_script.rb
Any idea what's going wrong with what I've done? Thank you.
Thank you all for your answers.
It's my mistake.
Since I'm using SSH key forwarding on the local machine, the SSH key forwarding related environment variables were all there when I executed the script from the shell, but in the cron job context those environment variables are missing.
Try to separate the things that might go wrong. The ones I can think of are:
The cron syntax - is the time value given legal? (10/* is not valid in standard cron; every ten minutes would be written */10.)
Permissions - execute permissions and read permissions for the relevant directory and file
Quoting - what scope does cron cover? Does it run only the first command?
In order to dissect this, I suggest you first run a really simple cron job, like 'ls'. Next, run a single-line script. Next, embed your commands in a shell script file. Somewhere along these lines you should find the problem.
The problem is your environment. While testing in your shell, the script is fully equipped and boosted by your shell environment. When running under cron, the environment is very, very stripped down.
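A hedged illustration of the usual fix, giving the job its environment explicitly (the profile path is an assumption about your setup, and note that this cannot restore per-session values such as SSH agent variables):

# set what the script needs at the top of the crontab...
PATH=/usr/local/bin:/usr/bin:/bin

# ...or source a profile as part of the job itself
*/10 * * * * . "$HOME/.profile"; cd /home/hekin && /usr/bin/ruby my_script.rb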
What is the working directory "." for your script? I guess it will be "/" rather than "$HOME", so your script won't be able to write at that location and fails. Try using an absolute path for the destination.

chroot + execvp + bash

Update
Got it! See my solution (fifth comment)
Here is my problem:
I have created a small binary called "jail", and in /etc/passwd I have made it the default shell for a test user.
Here is the -- simplified -- source code:
#define HOME "/home/user"
#define SHELL "/bin/bash"
...
if(chdir(HOME) || chroot(HOME)) return -1;
...
char *shellargv[] = { SHELL, "-login", "-rcfile", "/bin/myscript", 0 };
execvp(SHELL, shellargv);
Well, no matter how hard I try, it seems that when my test user logs in, /bin/myscript is never sourced. Similarly, if I drop a .bashrc file in the user's home directory, it is ignored as well.
Why would bash snub these files?
--
Some clarifications - not necessarily relevant, but to clear up some of the points made in the comments:
The 'jail' binary is actually suid, thus allowing it to chroot() successfully.
I have used 'ln' to make the appropriate binaries available - my jail cell is nicely padded :)
The issue does not seem to be with chrooting the user...something else is remiss.
As Jason C says, the exec'ed shell isn't interactive.
His solution will force the shell to be interactive if it accepts -i to mean that (and bash does):
char *shellargv[] = { SHELL, "-i", "-login", ... };
execvp(SHELL, shellargv);
I want to add, though, that traditionally a shell will act as a login shell if ARGV[0] begins with a dash.
char *shellargv[] = {"-"SHELL, "-i", ...};
execvp(SHELL, shellargv);
Usually, though, Bash will autodetect whether it should run interactively or not. Its failure to do so in your case may be because of missing /dev/* nodes.
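If that turns out to be the culprit, a sketch of creating the usual minimal device nodes inside the jail (run as root; these are the standard Linux major/minor numbers):

mkdir -p /home/user/dev
mknod -m 666 /home/user/dev/null c 1 3
mknod -m 666 /home/user/dev/zero c 1 5
mknod -m 666 /home/user/dev/tty  c 5 0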
The shell isn't interactive. Try adding -i to the list of arguments.
I can identify with wanting to do this yourself, but if you haven't already, check out the Jail Chroot Project and Jailkit for some drop-in tools to create a jail shell.
By the time your user is logging in and their shell tries to source this file, it's running under their UID. The chroot() system call is only usable by root -- you'll need to be cleverer than this.
Also, chrooting to a user's home directory will make their shell useless, as (unless they have a lot of stuff in there) they won't have access to any binaries. Useful things like ls, for instance.
Thanks for your help, guys,
I figured it out:
I forgot to setuid()/setgid() to root, chroot(), then setuid()/setgid() back to the real user, and finally pass a proper environment using execve().
Oh, and if I pass no arguments to bash, it will source ~/.bashrc; if I pass "-l", it will source /etc/profile.
Cheers!
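A condensed sketch of that sequence, reusing the question's HOME and SHELL defines (error handling kept minimal; the environment strings are illustrative):

#include <unistd.h>

#define HOME  "/home/user"
#define SHELL "/bin/bash"

int main(void) {
    uid_t uid = getuid();   /* the real user; the binary is suid root */
    gid_t gid = getgid();

    /* enter the jail while the effective uid is still root */
    if (chdir(HOME) || chroot(HOME)) return -1;

    /* drop privileges back to the real user: group first, then user */
    if (setgid(gid) || setuid(uid)) return -1;

    /* hand bash an explicit environment; inside the jail the old
       /home/user is now "/" */
    char *argv[] = { SHELL, "-i", 0 };
    char *envp[] = { "HOME=/", "PATH=/bin:/usr/bin", "SHELL=" SHELL, 0 };
    execve(SHELL, argv, envp);
    return -1;  /* execve returns only on failure */
}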
