I was able to get through the SNMP installation quickly and it works fine.
I am currently looking into one of the agent modules and trying to modify its source. I ran into a requirement where I need to remove a user via the agent.
I am stuck on how to complete this:
Just like the way net-snmp-create-v3-user creates a user on the server side, I was wondering if there is something similar to remove a user.
In my understanding, net-snmp-create-v3-user simply does the following:
service snmpd stop
$EDITOR /var/lib/net-snmp/snmpd.conf
[add *usmUser* lines]
$EDITOR /etc/snmp/snmpd.conf
[add *rouser* and *rwuser* lines]
service snmpd start
snmpd must be stopped before adding new user data to the .conf files.
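For reference, a typical invocation looks something like the following (treat this as a sketch; someuser and the passphrases are placeholders, and flag support varies between net-snmp versions):
# create a read-only SNMPv3 user with MD5 authentication and DES privacy
net-snmp-create-v3-user -ro -a MD5 -A myauthpass -x DES -X myprivpass someuser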
Equivalent to net-snmp-create-v3-user, removing a user would be something similar:
service snmpd stop
$EDITOR /var/lib/net-snmp/snmpd.conf
[find and remove *usmUser* info]
$EDITOR /etc/snmp/snmpd.conf
[find and remove *rouser* and *rwuser* info]
service snmpd start
Note that the usmUser fields are expressed as hex strings rather than printable characters. They are only encoded, not encrypted.
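Assuming that layout, a rough sketch of scripting the removal could look like this (someuser is a placeholder; the exact format of the usmUser line can vary, so verify what the sed patterns match before deleting anything):
USER=someuser
HEXUSER=0x$(printf '%s' "$USER" | xxd -p)   # the name as snmpd may store it, hex-encoded
service snmpd stop
sed -i "/^usmUser .*$HEXUSER/d" /var/lib/net-snmp/snmpd.conf
sed -i "/^rouser $USER/d; /^rwuser $USER/d" /etc/snmp/snmpd.conf
service snmpd start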
I just had a similar issue. I had added a user and wanted to delete it again. However, net-snmp removes the createUser statements from the /var/net-snmp/snmpd.conf file for security reasons, so Ashwin kumar's answer did not work for me (* see EDIT below).
snmpusm has a delete option, which can be used to remove users. snmpusm requires another user to authenticate the delete request (I haven't tested without, but I would assume that the other user has to have RW access). The following example enabled me to remove a user from my snmp configuration:
snmpusm -v 3 -u <RWUSER> -l authNoPriv -a MD5 -A <PASSWORD_OF_RWUSER> localhost delete <USERNAME_TO_DELETE>
This solution is inspired by this page http://www.mkssoftware.com/docs/man1/snmpusm.1.asp, which also describes how to create a user and change a user's passphrase with snmpusm.
EDIT: My bad, I didn't notice that /var/net-snmp/snmpd.conf actually contained more lines than vim displayed without scrolling. The "usmUser" lines that Ashwin mentions are there. I haven't tried to remove the lines, but I assume that would work as well.
I was provided a tool that does an SSH to a remote host. The remote host is a newly created Docker container. I was trying to understand whether there are commands being executed right after the SSH (probably using something like ssh -t <some commands>).
It seems like .bash_history does not include those commands. In that case, what else can I do to figure out what commands are being executed right after my login? Thank you.
To find out the actual commands that are executed, you could add "set -v" or "set -x" to the shell initialization file(s) on the system you are ssh-ing to.
See man bash (the "INVOCATION" section) to find out which files will be executed, so that you can figure out which file to add the "set" command to.
You will probably want to do that temporarily ... because the output is verbose.
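For example, something like this on the remote system (a temporary change; which init file to use depends on whether the incoming shell is a login shell):
# print every command as it is executed; remove this line when done
echo 'set -x' >> ~/.bashrc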
Another approach would be to configure sshd to set the logging level to DEBUG and see what commands are requested. Note, however, that sshd DEBUG logging may violate your users' privacy.
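A sketch of that approach, assuming a Debian/Ubuntu-style layout (the config path and log file location vary by distro):
# raise sshd verbosity, restart it, then watch the auth log for requested commands
sed -i 's/^#\?LogLevel.*/LogLevel DEBUG/' /etc/ssh/sshd_config
service ssh restart
tail -f /var/log/auth.log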
If you are trying to do this kind of thing to find out what is happening on the first "boot" of a docker instance, try putting the (temporary) config changes into the docker image that you are starting.
Note that the bash history only contains command lines submitted at an interactive shell prompt; commands passed directly on the ssh command line never appear there.
I have a shell script that makes a few calls to Asterisk at some point and shows some output. Calling Asterisk is the first thing I have tried that seems not to work. I determined that the user I set up to run the script didn't have the permissions to run Asterisk, so I looked at ways to run it as root (the only other user on the system), which would get around that.
I tried using su with no luck. For the past two hours, I've been messing with sudo and sudoers and not been able to get it working.
For example, here is some code called in my script, run by the user com:
printf "\n"
calls=`sudo "asterisk -rx 'core show channels'" | grep "active call"`
lastinboundcaller=`cat /var/log/asterisk/lastcaller.txt`
printf '%s\n' "Current Call Count: $calls"
printf '%s\n' "Last Inbound Caller: $lastinboundcaller"
Output:
[sudo] password for com:
sudo: asterisk -rx 'core show channels': command not found
Current Call Count:
Last Inbound Caller: Unknown
There are two problems here:
It's prompting for a password. Why it's prompting for the current user's password rather than the root password, I have no idea, but it shouldn't prompt for any password at all.
The Asterisk command asterisk -rx "command" is still not working — in other words, it's still failing to run the Asterisk shell, though it should have permission.
I tried updating my sudoers file, and I also tried creating a new file named asterisk in /etc/sudoers.d and putting my command in there.
My latest modification to that file was:
com ALL = (ALL:ALL) NOPASSWD: /usr/sbin/asterisk
Before that, I tried:
com ALL = (root) NOPASSWD: /usr/sbin/asterisk
My understanding is this should allow the user com to execute asterisk as sudo without a password. Clearly, something is not working.
I have followed the answers to numerous similar SO posts, like:
Use sudo without password INSIDE a script
https://unix.stackexchange.com/questions/18877/what-is-the-proper-sudoers-syntax-to-add-a-user
Unfortunately, despite following all the answers I've been able to find on this issue, none have worked for me.
Can anyone point me in the right direction here or suggest an alternative? I already consulted a Linux expert and this seems to be the right approach. This is all super easy to do in Windows and I'm surprised it's all this convoluted in Linux.
Don't quote the argument to sudo. It expects the first argument to be the name of the command, so it thinks the whole command line is the program name.
It should be
calls=`sudo asterisk -rx 'core show channels' | grep "active call"`
Why it's prompting for the current user's password rather than the root password, I have no idea, but it shouldn't prompt for any password at all.
That's how sudo works. It prompts for the current user's password, and checks /etc/sudoers to see if they're allowed to run the command. You're thinking of su, which prompts for the root password.
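Putting both fixes together: sudo matches the sudoers rule against the command's full path, so calling asterisk by the absolute path from your sudoers line is the safest way to hit the NOPASSWD entry. A sketch, assuming /usr/sbin/asterisk is correct for your install:
# /etc/sudoers.d/asterisk (edit with: visudo -f /etc/sudoers.d/asterisk)
com ALL = (root) NOPASSWD: /usr/sbin/asterisk
# in the script: unquoted command, absolute path
calls=`sudo /usr/sbin/asterisk -rx 'core show channels' | grep "active call"`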
I want to use SSH passwordless login using authentication key pairs.
I added
eval `ssh-agent -s`
ssh-add ~/.ssh/my_p_key
to ~/.profile. This doesn't work. If I use ~/.bashrc instead, it works fine.
Why do I have to set this every time I start a bash shell instead of once when the user logs in? I could not find any explanation.
Is there no better way to configure this?
The answer below solved my problem, and to me it looks like a very legitimate solution.
Add private key permanently with ssh-add on Ubuntu
The
eval `ssh-agent -s`
line sets the environment of the current process only. If you are using a window manager, e.g. on a Linux machine, then the window manager will likely offer a way to run a program such as your ssh-agent on startup and pass its environment down to all processes started there, so all your terminal windows/tabs will allow you a passwordless login. The exact location, configuration and behaviour depend on the desktop/WM used.
If you are on a system without a windowing environment, you might look at the output of the ssh-agent call and paste it into every shell you open; however, that may be as complicated as entering your password. The output will set something like
SSH_AGENT_PID=4041
SSH_AUTH_SOCK=/tmp/ssh-WZANnlaFiaBt/agent.3966
and you can log in without a password from every place where these are set.
Just adding "IdentityFile ~/.ssh/my_p_key" to my .ssh/config did not work for me. I had to also add "AddKeysToAgent yes" to that file.
I think this extra line is necessary because gnome-keyring loads my key, but it has a bug that prevents the key from being forwarded to the machine I ssh into.
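For reference, the combined ~/.ssh/config entry might look like this (a sketch reusing the key path from the question; substitute your own key file):
# ~/.ssh/config
Host *
    AddKeysToAgent yes
    IdentityFile ~/.ssh/my_p_key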
Edit: After upgrading to Ubuntu 20.04, new terminals that I started could not access the ssh-agent. I had to add eval (ssh-agent -c) &>/dev/null to my ~/.config/fish/config.fish file.
* QUICK SOLUTION *
For those of you visiting this page based solely on the title and not wanting to read through everything below, or thinking everything below doesn't apply to your situation, maybe this will help. If all you are looking to do is change a user's password on boot and are using Ubuntu 12.04 or similar, here is all you have to do. Add a script to start on boot containing the following:
printf "New Password\nRepeat Password\n" | passwd user
Keep in mind, this must be run as root, otherwise you will need to provide the original password like so:
printf "Original Password\nNew Password\nRepeat Password\n" | passwd user
* START ORIGINAL QUESTION *
I have a first boot script that sets up a VM by doing some configuration and file copies from a mounted iso. Basically the following happens:
VM boots for the first time.
/etc/rc.local is used to mount a CD ISO to /media/cdrom and execute /media/cdrom/boot.sh
The boot.sh file does some basic configuration, copies some files from the CD to the VM, and should update the user's password, using the current password.
This part of the script fails. The password is not updating. I have tried the following:
VAR="1234test6789"
echo -e "DEFAULT\n$VAR\n$VAR" | passwd user
Basically, the default VM is set up with a user (for example, jack) with a default password (DEFAULT). The script above, using the default password, updates to the new password stored in VAR. The script works by itself when I am logged in, but I can't get it to do the same on boot. I'm sure there is some sort of system policy or something that prevents this. If so, I need some sort of workaround. This VM is being mass deployed; it is packaged automatically and configured with a custom user password that is passed from the CD ISO.
Please help. Thank you!
* UPDATE *
Oh, and I'm using Ubuntu 12.04
* UPDATE *
I tried your suggestion. The following fails when run directly from rc.local, i.e. the password does not update. The script is running, however; I tested that by adding the touch line.
touch /home/jack/test
VAR="1234test5678"
printf "P#ssw0rd\n$VAR\n$VAR" | passwd jack
P#ssw0rd is the example default VM password.
jack is the example username.
* UPDATE *
OK, we think the issue may be tied to rc.local. rc.local is called really early on, before the run levels, and that may be causing the issue.
* UPDATE *
Well, potentially good news. The password seems to be updating now, but it's updating to something other than what I set in $VAR. I think it might be adding something to it. This is of course just a guess. Every time I run the test, immediately after the script runs at boot I can no longer log in with the username it was trying to update. I know that's not a lot of information to go on, but it's all I've got at the moment. Any ideas what or why it's appending something else to the password?
* SOLUTION *
So there were several small problems preventing me from getting the suggestion below working. I won't outline them here as they are irrelevant. The ultimate solution was from Graeme, tied in with some other features of my script, which I will share below.
The default VM boots
rc.local does the following:
if [ -f /etc/program/tmp ]; then
    mount -t iso9660 -o ro /dev/cdrom /media/cdrom
    cd /media/cdrom
    ./boot.sh
fi
(The tmp file is there just to prevent the first boot script from running more than once. After boot.sh runs once, it removes that tmp file.)
boot.sh on the CDROM runs (with root privileges)
boot.sh copies files from the CDROM to /etc/program
boot.sh also updates the users password with the following:
VAR="DEFAULT"
cp config "/etc/program/config"
printf "$VAR\n$VAR\n" | passwd user
rm -rf /etc/program/tmp
(VAR is changed by another part of the server that is connected to our OVA deployment solution. Basically, each user gets a customized, random password for their VM so that similar users cannot access each other's VMs.)
There is still some testing to be done, but I am reasonably satisfied that this issue is resolved. 95%
Edit - updated for not entering the original password
The sh version of echo does not have the -e option, unlike bash. Switch echo for printf. Also, the rc.local script will have root privileges, so passwd won't prompt for the original password. Supplying it anyway will cause the command to fail, since 'DEFAULT' will be taken as the new password and the confirmation will fail. This should work:
VAR="1234test6789"
printf "$VAR\n$VAR\n" | passwd user
Ubuntu uses dash at boot time, which is a drop-in replacement for sh and is much more lightweight than bash. echo -e is a common bashism which doesn't work elsewhere.
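As an aside, if piping into passwd ever proves fragile, chpasswd is designed for exactly this non-interactive case (a sketch; user is a placeholder):
# chpasswd reads user:password pairs from stdin, with no prompt sequence to satisfy
VAR="1234test6789"
echo "user:$VAR" | chpasswd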
I want to install a software library (SWIG) on a list of computers (Jenkins nodes). I'm using the following script to automate this somewhat:
NODES="10.8.255.70 10.8.255.85 10.8.255.88 10.8.255.86 10.8.255.65 10.8.255.64 10.8.255.97 10.8.255.69"
for node in $NODES; do
scp InstallSWIG.sh root#$node:/root/InstallSWIG.sh
ssh root#$node sh InstallSWIG.sh
done
This way it's automated, except for the password requests that occur for both the scp and ssh commands.
Is there a way to enter the passwords programmatically?
Security is not an issue. I’m looking for solutions that don’t involve SSH keys.
Here's an expect example that SSHes in to Stripe's Capture The Flag server and enters the password automatically.
expect <<< 'spawn ssh level01@ctf.stri.pe; expect "password:"; send "e9gx26YEb2\r"; interact'
With SSH the right way to do it is to use keys instead.
# ssh-keygen
and then copy the *~/.ssh/id_rsa.pub* file to the remote machine (root@$node) and append it to the remote user's .ssh/authorized_keys file.
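ssh-copy-id automates that copy step, so the one-time setup could look something like this (a sketch; it prompts for each node's password once, after which the loop in the question runs without prompts):
ssh-keygen
for node in $NODES; do
    ssh-copy-id root@$node    # appends ~/.ssh/id_rsa.pub to the node's authorized_keys
done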
You can perform the task using empty, a small utility from sourceforge. It's similar to expect but probably more convenient in this case. Once you have installed it, your first scp will be accomplished by the following two commands:
./empty -f scp InstallSWIG.sh root@$node:/root/InstallSWIG.sh
echo YOUR_SECRET_PASSWORD | ./empty -s -c
The first one starts your command in the background, tricking it into thinking it's running in interactive mode on a terminal. The other one sends it data from stdin. Of course, putting your password anywhere on the command line is risky, due to shell history being preserved, other users being able to see it in ps results, etc. A somewhat better (though still not secure) approach would be to store the password in a file and redirect the second command's input from that file instead of using echo and a pipe.
After copying to the server, you can run the script in a similar manner:
./empty -f ssh root@$node sh InstallSWIG.sh
echo YOUR_SECRET_PASSWORD | ./empty -s -c
You could look into setting up passwordless ssh keys for that. Establishing Batch Mode Connections between OpenSSH and SSH2 is a starting point, you'll find lots of information on this topic on the web.
Wes' answer is the correct one but if you're keen on something dirty and slow, you can use expect to automate this.
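A rough expect version of the loop from the question might look like this (dirty and slow as noted, and the password is visible to other users via ps, so treat it strictly as an illustration):
for node in $NODES; do
    expect -c "
        spawn scp InstallSWIG.sh root@$node:/root/InstallSWIG.sh
        expect \"password:\"
        send \"YOUR_SECRET_PASSWORD\r\"
        expect eof
    "
    expect -c "
        spawn ssh root@$node sh InstallSWIG.sh
        expect \"password:\"
        send \"YOUR_SECRET_PASSWORD\r\"
        expect eof
    "
done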