Bash: change password on boot

* QUICK SOLUTION *
For those of you visiting this page based solely on the title and not wanting to read through everything below, or thinking everything below doesn't apply to your situation, maybe this will help... If all you want to do is change a user's password on boot and you are using Ubuntu 12.04 or similar, here is all you have to do. Add a script to start on boot containing the following:
printf "New Password\nRepeat Password\n" | passwd user
Keep in mind, this must be run as root; otherwise you will need to provide the original password, like so:
printf "Original Password\nNew Password\nRepeat Password\n" | passwd user
* START ORIGINAL QUESTION *
I have a first boot script that sets up a VM by doing some configuration and file copies from a mounted iso. Basically the following happens:
VM boots for the first time.
/etc/rc.local is used to mount a CD ISO to /media/cdrom and execute /media/cdrom/boot.sh
The boot.sh file does some basic configuration, copies some files from the CD to the VM, and should update the user's password, using the current password.
This part of the script fails. The password is not updating. I have tried the following:
VAR="1234test6789"
echo -e "DEFAULT\n$VAR\n$VAR" | passwd user
Basically, the default VM is set up with a user (for example jack) with a default password (DEFAULT). The script above, using the default password, updates it to the new password stored in VAR. The script works by itself when logged in, but I can't get it to do the same on boot. I'm sure there is some sort of system policy or something that prevents this. If so, I need some sort of workaround. This VM is being mass deployed; it is packaged automatically and configured with a custom user password that is passed from the CD ISO.
Please help. Thank you!
* UPDATE *
Oh, and I'm using Ubuntu 12.04
* UPDATE *
I tried your suggestion. The following fails directly in rc.local, i.e. the password does not update. The script is running, however; I tested that by adding the touch line.
touch /home/jack/test
VAR="1234test5678"
printf "P#ssw0rd\n$VAR\n$VAR" | passwd jack
P#ssw0rd is the example default VM password.
Jack is the example username.
* UPDATE *
OK, we think the issue may be tied to rc.local. rc.local is called really early on, before the run levels, and that may be causing the issue.
* UPDATE *
Well, potentially good news. The password seems to be updating now, but it's updating to something other than what I set in $VAR. I think it might be adding something to it. This is of course just a guess. Every time I run the test, immediately after the script runs at boot, I can no longer log in with the username it was trying to update. I know that's not a lot of information to go on, but it's all I've got at the moment. Any ideas what, or why, it's appending something else to the password?
* SOLUTION *
So there were several small problems as to why I could not get the suggestion below working. I won't outline them here as they are irrelevant. The ultimate solution came from Graeme, tied in with some other features of my script, which I will share below.
The default VM boots
rc.local does the following:
if [ -f /etc/program/tmp ]; then
    mount -t iso9660 -o ro /dev/cdrom /media/cdrom
    cd /media/cdrom
    ./boot.sh
fi
(The tmp file is there just to prevent the first-boot script from running more than once. After boot.sh runs once, it removes that tmp file.)
boot.sh on the CDROM runs (with root privileges)
boot.sh copies files from the CDROM to /etc/program
boot.sh also updates the user's password with the following:
VAR="DEFAULT"
cp config "/etc/program/config"
printf "$VAR\n$VAR\n" | passwd user
rm -rf /etc/program/tmp
(VAR is changed by another part of the server that is connected to our OVA deployment solution. Basically, each user gets a customized, well, random password for their VM so similar users cannot access each other's VMs.)
There is still some testing to be done, but I am reasonably satisfied (95%) that this issue is resolved.
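One quick way to confirm the change took effect is to compare the password hash field in /etc/shadow before and after boot (a hypothetical check, run as root; the second colon-separated field is the hash):
grep '^user:' /etc/shadow | cut -d: -f2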

Edit - updated for not entering the original password
The sh version of echo does not have the -e option, unlike bash. Switch echo for printf. Also, the rc.local script will have root privileges, so it won't prompt for the original password; including it will make the command fail, since 'DEFAULT' will be taken as the new password and the confirmation will fail. This should work:
VAR="1234test6789"
printf "$VAR\n$VAR\n" | passwd user
Ubuntu uses dash at boot time, which is a drop-in replacement for sh and is much more lightweight than bash. echo -e is a common bashism which doesn't work elsewhere.
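You can see the failure mode for yourself; on a typical dash build, echo has no -e option, so the flag is printed literally and ends up in the first line fed to passwd:
$ dash -c 'echo -e "DEFAULT\n1234test6789"'
-e DEFAULT
1234test6789
$ dash -c 'printf "DEFAULT\n1234test6789\n"'
DEFAULT
1234test6789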

Related

Changing user in bash script

I wanted to create an installation script for my Raspberry Pi which secures the default installation by configuring/hardening SSH, installing a firewall and fail2ban, and finally getting rid of the default user of Raspbian. All the other parts work, but the final part always shows me an error.
The new user is created and added to the sudo group. After that I want to delete the old user 'pi'. As the script runs with sudo in the user context of 'pi', I thought I could solve this by switching with 'su', but I just get an error that the user couldn't be deleted as it is used by a process:
echo "Enter the new user name? Only lower case letters allowed!"
read user
sudo adduser $user && adduser $user sudo
echo "default user 'pi' will now be deleted"
su -c "deluser -remove-home pi"
If I check with 'users', the user 'pi' is gone, but I can still log in with this account. How can I solve this problem inside the script?
I tried the answers I found here: How do I use su to execute the rest of the bash script as that user? and here: https://unix.stackexchange.com/questions/361327/how-to-login-as-different-user-inside-shell-script-and-execute-a-set-of-commands but nothing seems to work. I searched Google but can't find any solution that works. Is what I'm trying to do even possible?
I usually add set -eux at the beginning of a bash script. It makes it much easier to debug and to find typos and errors.
Try switching user inside the script with:
sudo -i -u ${user} <command to delete pi here>
I think I found the cause of the problem; 'set -eux' was a great help:
deluser pi
Removing user `pi' ...
Warning: group `pi' has no more members.
userdel: user pi is currently used by process 445
/usr/sbin/deluser: `/usr/sbin/userdel pi' returned error code 8. Exiting.
I tried ps -fu pi to find the process causing the trouble: it's /lib/systemd/systemd --user. Is there a way to stop this process inside the script?
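If the box runs systemd (as recent Raspbian does), the lingering session can be ended before deleting the account. A hedged sketch:
# end pi's sessions (including its systemd --user instance), then delete the account
sudo loginctl terminate-user pi || sudo pkill -u pi
sleep 2    # give the processes a moment to exit
sudo deluser --remove-home pi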

How to run command as another user

I have a shell script that makes a few calls to Asterisk at some point and shows some output. Calling Asterisk is the first thing I have tried that seems not to work. I determined the user I set up to run the script didn't have the permissions to run Asterisk, so I looked at ways to run it as root (the only other user on the system), which would get around that.
I tried using su with no luck. For the past two hours, I've been messing with sudo and sudoers and not been able to get it working.
For example, here is some code called in my script, run by the user com:
printf "\n"
calls=`sudo "asterisk -rx 'core show channels'" | grep "active call"`
lastinboundcaller=`cat /var/log/asterisk/lastcaller.txt`
printf '%s\n' "Current Call Count: $calls"
printf '%s\n' "Last Inbound Caller: $lastinboundcaller"
Output:
[sudo] password for com:
sudo: asterisk -rx 'core show channels': command not found
Current Call Count:
Last Inbound Caller: Unknown
There are two problems here:
It's prompting for a password. Why it's prompting for the current user's password rather than the root password, I have no idea, but it shouldn't prompt for any password at all.
The Asterisk command asterisk -rx "command" is still not working — in other words, it's still failing to run the Asterisk shell, though it should have permission.
I tried updating my sudoers file, and I also created a new file in /etc/sudoers.d titled asterisk and put my command in there.
My latest modification to that file was:
com ALL = (ALL:ALL) NOPASSWD: /usr/sbin/asterisk
Before that, I tried:
com ALL = (root) NOPASSWD: /usr/sbin/asterisk
My understanding is this should allow the user com to execute asterisk as sudo without a password. Clearly, something is not working.
I have followed the answers to numerous similar SO posts, like:
Use sudo without password INSIDE a script
https://unix.stackexchange.com/questions/18877/what-is-the-proper-sudoers-syntax-to-add-a-user
Unfortunately, despite following all the answers I've been able to find on this issue, none have worked for me.
Can anyone point me in the right direction here or suggest an alternative? I already consulted a Linux expert, and this seems to be the right approach. This is all super easy to do in Windows, and I'm surprised it's this convoluted in Linux.
Don't quote the argument to sudo. It expects the first argument to be the name of the command, so it thinks the whole command line is the program name.
It should be
calls=`sudo asterisk -rx 'core show channels' | grep "active call"`
Why it's prompting for the current user's password rather than the root password, I have no idea, but it shouldn't prompt for any password at all.
That's how sudo works. It prompts for the current user's password, and checks /etc/sudoers to see if they're allowed to run the command. You're thinking of su, which prompts for the root password.
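To verify a setup like this without actually running asterisk, two standard options help; the file name below is the drop-in described above:
sudo -l -U com                        # run as root: lists the sudo rules that apply to com
visudo -cf /etc/sudoers.d/asterisk    # syntax-checks the drop-in file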

Script for accessing a remote server, getting the error log and renaming it automatically

Hi, my name is Evan, a newbie on UNIX. :)
I want to ask about scripting on UNIX. Here is the case:
I have 4 UNIX servers (with FreeBSD OS); let's call them the "Gorillas".
And one gateway server (also with FreeBSD OS); let's call this one "Monkey".
If I want to access and log in to a Gorilla server, I have to use PuTTY to access Monkey and then, from Monkey, open an SSH connection into the Gorilla server.
The case is, my boss is asking me to get the Apache error log, every day, from each of the four Gorilla servers.
All this time, I have been doing it manually: PuTTY to Monkey - SSH to a Gorilla - copy the error log onto the Monkey server with the scp command - and then fetch the error log from the Monkey server with WinSCP.
The problems are:
how to make a script for this case?
how to rename the error logs automatically? Because the error log on every server has the same name, "01_error.log", I had to rename them manually so they don't overwrite each other.
I hope somebody can help me with this.
Thank you all for your help and time, and sorry for the bad English. :)
The easiest way to accomplish this would be to set up an automated job on Gorilla4.
Your first problem is that you'll need to set up passwordless SSH access between Gorilla4 and Monkey, so a person doesn't need to physically type in the password.
While you can do this with the 'root' user I would STRONGLY recommend against it.
Instead create a maintenance user on BOTH hosts:
$ useradd -m maintuser
Then switch to the new user and create an SSH key on Gorilla4:
$ ssh-keygen -t rsa -b 2048
Accept the defaults when prompted. Then append the contents of the id_rsa.pub file to the maintuser's ~/.ssh/authorized_keys file on Monkey.
Now, when you are the "maintuser" on Gorilla4, you can SSH to Monkey without a password.
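If ssh-copy-id is available on Gorilla4, it does that copy (and sets the authorized_keys permissions) in one step:
$ ssh-copy-id maintuser@monkey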
Then you can create a script called "copy_log.sh":
#!/bin/bash
# copy_log.sh
log_path="/path/to/logdir"
log_name="01_error.log"
target_host="monkey"
echo "copying ${log_name} to ${target_host}..."
# note: $(hostname) below will add "Gorilla4" to the name of the file
scp ${log_path}/${log_name} maintuser@${target_host}:/path/to/dest/$(hostname)_${log_name} || {
    echo "Failed to scp file"
    exit 2
}
echo "completed successfully"
Make it executable:
$ chmod +x copy_log.sh
Add it to the maintuser's crontab on Gorilla4 to run at whatever time you would normally do it yourself, say 8am every day:
00 08 * * * /path/to/copy_log.sh >> /some/log/dir/copy_log.out 2>&1
Hope this helps; if nothing else, it will give you plenty to Google :)

In a bash script executed on boot, how do I get the username of the user just logged-in?

I need to execute a bash script on boot.
To do so I created a file
/etc/init.d/blah
I edited it and added the following lines:
#! /bin/sh
# /etc/init.d/blah
touch '/var/lock/blah'
username1=$(id -n -u)
username2=$(whoami)
touch '/var/lock/1'${username1}
touch '/var/lock/2'${username2}
exit 0
The script is executed with root privileges (which is what I need, because I have to use mount inside this script), but the problem is that I also need to know the username of the user who has just logged in, because my goal is to mount a certain folder to a certain mount point depending on the username, like
mount -o bind /home/USERNAME/mount-point /media/data/home/USERNAME/to-be-mounted
Going back to the boot script, if I do
sudo update-rc.d blah defaults
and then reboot and log in with my username (let's say john), both ways to get the username in my script produce root; in fact I've got 3 files:
/var/lock/blah
/var/lock/1root
/var/lock/2root
So, how can I get the username of the user who just logged in? (john in my example)
EDITED:
I solved it this way:
1. I created a .desktop file, for each user who needs the automount, to autostart a script at login (I'm on LXDE), and put it in /home/{username}/.config/autostart:
[Desktop Entry]
Type=Application
Exec=bash "/path/to/mount-bind.sh"
2. I stored in that path a bash script called mount-bind.sh and made it executable:
#!/bin/bash
_username=$1
if [[ -z "${_username}" ]]; then
    _username="$(id -u -n)"
fi
mkdir -p "/home/${_username}/mount-folder"
sudo mount -o bind "/media/data/home/${_username}/mount-folder" "/home/${_username}/mount-folder"
exit 0
3. I added the following line to /etc/sudoers
%nopwd ALL=(ALL) NOPASSWD: /bin/mount
4. I created the nopwd group and added all the users I need to it.
This way, after login, the path is mounted under the user's home.
The problem with this method is that I have to create the desktop file for each new user and add him/her to nopwd, but it works.
Any further improvement is welcome! :)
I think you should move from a boot time init script to a script executed at login time under the logged-in user. To allow this, you should look into ways to allow your users to execute the mount command you need. Depending on what you are trying to achieve, one of the following methods may help you:
Assuming you are on Linux or some other UNIX with a similar feature, add the mountpoint to /etc/fstab with the user option, allowing normal users to mount the entry (see the sketch after this list).
Execute mount through sudo with a suitably narrow sudoers configuration as to not allow users to execute any mount commands.
Write a suid-root program in C which executes the required mount commands when called. This, however, is very tricky to get right without creating gaping security holes.
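For the first option, a sketch of what such an fstab entry could look like, using the paths from the question; whether the user option is honored for bind mounts depends on your mount/util-linux version, so treat this as an assumption to verify:
# /etc/fstab - "user" lets an ordinary user run: mount /home/john/mount-folder
/media/data/home/john/mount-folder /home/john/mount-folder none bind,user,noauto 0 0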
Login does not happen at boot time. You cannot foretell which user is going to log in when booting.
Try exporting the login details and using them:
export username2=$(whoami)

How can I automate running commands remotely over SSH to multiple servers in parallel?

I've searched around a bit for similar questions, but most only cover running one command, or perhaps a few, with something like:
ssh user@host -t sudo su -
What if I essentially need to run a script on (let's say) 15 servers at once? Is this doable in bash? In a perfect world I need to avoid installing applications, if at all possible, to pull this off. For argument's sake, let's just say that I need to do the following across 10 hosts:
Deploy a new Tomcat container
Deploy an application in the container, and configure it
Configure an Apache vhost
Reload Apache
I have a script that does all of that, but it relies on me logging into all the servers, pulling a script down from a repo, and then running it. If this isn't doable in bash, what alternatives do you suggest? Do I need a bigger hammer, such as Perl (Python might be preferred, since I can guarantee Python is on all boxes in a RHEL environment thanks to yum/up2date)? If anyone can point me to any useful information it'd be greatly appreciated, especially if it's doable in bash. I'll settle for Perl or Python, but I just don't know them as well (working on that). Thanks!
You can run a local script as shown by che and Yang, and/or you can use a here document (the backslash in <<\EOF keeps variables from being expanded locally, so they are expanded on the remote side instead):
ssh root@server /bin/sh <<\EOF
wget http://server/warfile # Could use NFS here
cp app.war /location
command 1
command 2
/etc/init.d/httpd restart
EOF
Often, I'll just use the original Tcl version of Expect. You only need to have that on the local machine. If I'm inside a program using Perl, I do this with Net::SSH::Expect. Other languages have similar "expect" tools.
The issue of how to run commands on many servers at once came up on a Perl mailing list the other day and I'll give the same recommendation I gave there, which is to use gsh:
http://outflux.net/unix/software/gsh
gsh is similar to the "for box in box1_name box2_name box3_name" solution already given, but I find gsh more convenient. You set up an /etc/ghosts file containing your servers in groups, such as web, db, RHEL4, x86_64, or whatever (man ghosts), and then you use that group when you call gsh.
[pdurbin@beamish ~]$ gsh web "cat /etc/redhat-release; uname -r"
www-2.foo.com: Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
www-2.foo.com: 2.6.9-78.0.1.ELsmp
www-3.foo.com: Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
www-3.foo.com: 2.6.9-78.0.1.ELsmp
www-4.foo.com: Red Hat Enterprise Linux Server release 5.2 (Tikanga)
www-4.foo.com: 2.6.18-92.1.13.el5
www-5.foo.com: Red Hat Enterprise Linux Server release 5.2 (Tikanga)
www-5.foo.com: 2.6.18-92.1.13.el5
[pdurbin@beamish ~]$
You can also combine or split ghost groups, using web+db or web-RHEL4, for example.
I'll also mention that while I have never used shmux, its website contains a list of software (including gsh) that lets you run commands on many servers at once. Capistrano has already been mentioned and (from what I understand) could be on that list as well.
Take a look at Expect (man expect)
I've accomplished similar tasks in the past using Expect.
You can pipe the local script to the remote server and execute it with one command:
ssh -t user@host 'sh' < path_to_script
This can be further automated by using public key authentication and wrapping with scripts to perform parallel execution.
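A hedged sketch of such a wrapper; the host names and deploy.sh are placeholders. Each ssh reads the local script on stdin and runs in the background, and wait blocks until all of them finish:
#!/bin/bash
# run one background ssh per host, each executing the local script remotely
for host in web1 web2 web3; do
    ssh "user@$host" 'sh' < ./deploy.sh > "out.$host" 2>&1 &
done
wait    # collect every background job before exiting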
You can try paramiko. It's a pure-Python SSH client, and you can program your SSH sessions. There is nothing to install on the remote machines.
See this great article on how to use it.
To give you the structure, without actual code.
Use scp to copy your install/setup script to the target box.
Use ssh to invoke your script on the remote box.
pssh may be interesting since, unlike most solutions mentioned here, the commands are run in parallel.
(For my own use, I wrote a simpler small script very similar to GavinCattell's; it is documented here - in French.)
Have you looked at things like Puppet or Cfengine? They can do what you want and probably much more.
For those that stumble across this question, I'll include an answer that uses Fabric, which solves exactly the problem described above: Running arbitrary commands on multiple hosts over ssh.
Once Fabric is installed, you'd create a fabfile.py and implement tasks that can be run on your remote hosts. For example, a task to reload Apache might look like this:
from fabric.api import env, run

env.hosts = ['host1@example.com', 'host2@example.com']

def reload():
    """ Reload Apache """
    run("sudo /etc/init.d/apache2 reload")
Then, on your local machine, run fab reload and the sudo /etc/init.d/apache2 reload command would get run on all the hosts specified in env.hosts.
You can do it the same way you did before, just script it instead of doing it manually. The following code connects to a machine named 'loca' and runs two commands there. What you need to do is simply insert the commands you want to run.
che@ovecka ~ $ ssh loca 'uname -a; echo something_else'
Linux loca 2.6.25.9 #1 (blahblahblah)
something_else
Then, to iterate through all the machines, do something like:
for box in box1_name box2_name box3_name
do
    ssh $box 'commands_to_run_everywhere'
done
In order to make this ssh thing work without entering passwords all the time, you'll need to set up key authentication. You can read about it at IBM developerworks.
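Setting up key authentication is typically two commands, run on the box you launch the loop from (the box names are the placeholders above):
ssh-keygen -t rsa      # accept the defaults; an empty passphrase allows unattended runs
ssh-copy-id box1_name  # repeat for box2_name, box3_name, ...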
You can run the same command on several servers at once with a tool like cluster ssh. The link is to a discussion of cluster ssh on the Debian package of the day blog.
Well, for steps 1 and 2, isn't there a Tomcat manager web interface? You could script that with curl, or with zsh and the libwww plugin.
For SSH you're looking to:
1) not get prompted for a password (use keys)
2) pass the command(s) on SSH's command line; this is similar to rsh in a trusted network.
Other posts have shown you what to do, and I'd probably use sh too, but I'd be tempted to use Perl, like ssh tomcatuser@server perl -e 'do-everything-on-one-line;', or you could do this:
either scp the_package.tbz tomcatuser@server:the_place/.
ssh tomcatuser@server /bin/sh <<\EOF
# define stuff like TOMCAT_WEBAPPS=/usr/local/share/tomcat/webapps
# tar xj the_package.tbz, or rsync rsync://repository/the_package_place
mv $TOMCAT_WEBAPPS/old_war $TOMCAT_WEBAPPS/old_war.old
mv $THE_PLACE/new_war $TOMCAT_WEBAPPS/new_war
touch $TOMCAT_WEBAPPS/new_war    # you don't normally have to restart tomcat
mv $THE_PLACE/vhost_file $APACHE_VHOST_DIR/vhost_file
$APACHECTL restart    # might need to log in as the apache user to move that file and restart
EOF
You want dsh, or distributed shell, which is used a lot in clusters. Here is the link: dsh
You basically have node groups (a file with lists of nodes in them), and you specify which node group you wish to run commands on; then you use dsh, as you would ssh, to run commands on them.
dsh -a /path/to/some/command/or/script
It will run the command on all the machines at the same time and return the output prefixed with the hostname. The command or script has to be present on each system, so a shared NFS directory can be useful for these sorts of things.
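For example, using the flags from dsh's man page (-g selects a named node group instead of -a):
dsh -a -c -M -- uptime    # all machines, concurrently, output prefixed with the hostname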
This creates an ssh wrapper command, named after the host, for every machine you have accessed.
by Quierati
http://pastebin.com/pddEQWq2
#Use in .bashrc
#Use "HashKnownHosts no" in ~/.ssh/config or /etc/ssh/ssh_config
# If known_hosts is already hashed, delete it so hostnames are stored in plain text
[ ! -d ~/bin ] && mkdir ~/bin
for host in `cut -d, -f1 ~/.ssh/known_hosts | cut -f1 -d " "`
do
    [ ! -s ~/bin/$host ] && echo ssh $host '$*' > ~/bin/$host
done
[ -d ~/bin ] && chmod -R 700 ~/bin
export PATH=$PATH:~/bin
Example:
$ for i in hostname{1..10}; do $i who; done
There is a tool called FLATT (FLexible Automation and Troubleshooting Tool) that allows you to execute scripts on multiple Unix/Linux hosts with the click of a button. It is a desktop GUI app that runs on Mac and Windows, but there is also a command-line Java client.
You can create batch jobs and reuse on multiple hosts.
Requires Java 1.6 or higher.
Although it's a complex topic, I can highly recommend Capistrano.
I'm not sure if this method will work for everything that you want, but you can try something like this:
$ cat your_script.sh | ssh your_host bash
This will run the script (which resides locally) on the remote server.
I just read a blog post about using setsid, with no further installation/configuration beyond the mainstream tools. Tested and verified under Ubuntu 14.04.
The author has a very clear explanation and sample code as well; here's the magic part for a quick glance:
#----------------------------------------------------------------------
# Create a temp script to echo the SSH password, used by SSH_ASKPASS
#----------------------------------------------------------------------
SSH_ASKPASS_SCRIPT=/tmp/ssh-askpass-script
cat > ${SSH_ASKPASS_SCRIPT} <<EOL
#!/bin/bash
echo "${PASS}"
EOL
chmod u+x ${SSH_ASKPASS_SCRIPT}
# Tell SSH to read in the output of the provided script as the password.
# We still have to use setsid to eliminate access to a terminal and thus avoid
# it ignoring this and asking for a password.
export SSH_ASKPASS=${SSH_ASKPASS_SCRIPT}
......
......
# Log in to the remote server and run the above command.
# The use of setsid is a part of the machinations to stop ssh
# prompting for a password.
setsid ssh ${SSH_OPTIONS} ${USER}@${SERVER} "ls -rlt"
The easiest way I found, without installing or configuring much software, is to use plain old tmux. Say you have 9 Linux servers. Pick one box as your main. Start a tmux session:
tmux
Then create 9 split tmux panes by doing this 8 times:
ctrl-b + %
Now SSH into each box in each pane. You'll need to know some tmux shortcuts. To navigate, press:
ctrl+b <arrow-keys>
Once you're logged in to all your boxes in each pane, turn on pane synchronization, which lets you type the same thing into every box:
ctrl+b :setw synchronize-panes on
Now anything you type shows up in every pane. To turn it off, set the same option to off. To cycle through pane layouts, press ctrl-b <space-bar>.
This works a lot better for me since I need to see each terminal's output; sometimes servers crash or hang for whatever reason when downloading or upgrading software. If there are any issues, you can isolate and resolve them individually.
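If you would rather script the pane setup than press ctrl-b % eight times, something like this works with stock tmux commands (the host names are placeholders):
# open one pane per host, keep the layout tiled, then mirror keystrokes everywhere
tmux new-session -d -s multi 'ssh server1'
for host in server2 server3 server4; do
    tmux split-window -t multi "ssh $host"
    tmux select-layout -t multi tiled
done
tmux set-window-option -t multi synchronize-panes on
tmux attach -t multi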
