I created a user on my OS (Linux). I want to start gotty as that user, but couldn't get it to work.
gotty -w -p (port) "su - username" didn't work (actually it ran, but then closed the connection).
I can only interact with gotty when using this command: gotty -w -p (port) bash.
How can I change the user with gotty and keep a bash session in the browser using the gotty CLI?
This core Linux command runs a command as a specific user:
runuser -l "$1" -c "gotty -w -once -p 3000 bash"
I've got a script which needs to do something on a remote system using SSH. Something of this sort:
#!/bin/bash
ssh -tt $@ sudo ash -c 'echo "8.8.8.8 dns.google.com" >> /etc/hosts'
If the user doesn't need to enter a password for sudo to work, this is fine. But I can't figure out how to allow the user running this script to enter the password for sudo. Ideas? The remote shell is busybox's ash.
I am new to shell scripting.
I am trying to write a script that will run on my local machine.
A few of its commands are to run locally and the rest on a remote server.
Below is a sample script:
the first two commands will run on my local system,
the rest of them are to run on the remote server.
e.g.:
scp -i permissions.pem someJar.jar ubuntu@ip:/var/tmp
ssh -i permissions.pem ubuntu@ip
sudo su
service someService stop
rm -rf /home/ubuntu/someJar.jar
rm -rf /home/ubuntu/loggingFile.log
mv /var/tmp/someJar.jar .
service someService start
As the script will run on my local machine,
how do I make sure the 3rd and further commands take effect on the remote server and not on my machine?
Here's my sample.sh file -
scp -i permissions.pem someJar.jar ubuntu@ip:/var/tmp
SCRIPT="sudo su; ps aux | grep java; service someService stop; ps aux | grep java; service someService start; ps aux | grep java;"
ssh -i permissions.pem ubuntu@ip $SCRIPT
The scp works, but nothing is displayed after that.
You need to pass the rest of the script as a parameter to ssh. Try this format:
SCRIPT="sudo su; command1; command2; command3;"
ssh -i permissions.pem ubuntu@ip "$SCRIPT"
See: http://linuxcommand.org/man_pages/ssh1.html
Hope this helps.
Update:
The reason you don't see anything after running the command is that sudo is waiting for the password. To avoid this, there are three solutions:
Give the ubuntu user the permissions needed to perform all the tasks in the script.
Pass the password to sudo inside SCRIPT: echo 'password' | sudo -S su; ...
Modify the sudoers file and allow the ubuntu user to sudo without being prompted for a password. Run sudo visudo and add the following line: ubuntu ALL = NOPASSWD : ALL
Each system admin will have a different approach to this problem. I think everyone will agree that option 2 is the least secure. The rest is up for debate. In my opinion option 3 is slightly more secure. Yet the entire server is compromised if your key is compromised. The most secure is option 1. While it is painful to assign individual permissions, by doing so you limit your exposure to the assigned permissions in case your key is compromised.
One more note: it might be beneficial to replace ; with && in SCRIPT. That way the second command runs only if the first one finished successfully, the third runs only if the second finished successfully, and so on.
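A quick local illustration of the difference, with `false`/`true` standing in for the real remote commands (no ssh involved):

```shell
# ';' runs the next command unconditionally:
false; echo "runs anyway"            # → runs anyway

# '&&' runs the next command only if the previous one succeeded:
false && echo "never printed"        # (prints nothing)
true && echo "printed on success"    # → printed on success
```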
I want to execute sudo over ssh on remote servers and supply the password over standard input. The following code is in a shell script on a secured server with restricted access. The password is asked for beforehand, and the servers all use the same sudo password. The someaction command can take a few seconds to execute.
Here is the shell script extract:
read -s -p "please enter your sudo password" PASSWORD
ssh user@host1 -t "echo '$PASSWORD' | sudo -S someaction"
ssh user@host2 -t "echo '$PASSWORD' | sudo -S someaction"
My question: is it safe to use echo with a pipe? And are there any security problems that might occur, like the echoed password being logged on the remote server, etc.?
Maybe somebody has a better suggestion?
Note: I know other tools can do this, like ansible etc. I am not looking for another similar tool, just want to know whether using ssh/echo/sudo in the mentioned way is safe.
Yes, there is a problem!
As long as the command is running, anybody who can view all processes can see that password by running ps aux | grep echo:
root [..] zsh -c echo topsecret | sudo -S action
You could configure sudo not to ask for the password for a specific task for a user; that would certainly increase security over this solution.
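The exposure is easy to demonstrate locally, without ssh or sudo. Here `sh -c '…; sleep …'` stands in for the remote `sudo -S someaction`, with a made-up secret on its command line:

```shell
# Start a long-running command that received a "password" in its argv
# (a stand-in for: echo '$PASSWORD' | sudo -S someaction).
sh -c 'echo "topsecret-hunter2" >/dev/null; sleep 5' &
pid=$!

# Any local user can read that argv from the process table while it runs:
ps -o args= -p "$pid"

kill "$pid" 2>/dev/null
```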
I'm trying to run a command with sudo on a remote machine. When I do it directly with
ssh -t -t -t myserver -q "sudo otheruser<<EOF
remotescript.sh
EOF"
it works fine, but if I add & at the end of the last line then it doesn't work. Why? How can I make it work?
In fact I'm running several such commands (to different servers) from a local script, saving each output in a different file, and I would like them to run asynchronously.
Note: running ssh with otheruser#myserver is not an option. I really need to run sudo after I logged in.
Remove requiretty from sudo config (/etc/sudoers) on the remote machine.
Also add the -f option to ssh which puts the command in background (man: "must be used when ssh is run in the background").
The "&" should not be needed when using -f.
E.g:
ssh -f -t -t -t myserver -q "sudo otheruser<<EOF
remotescript.sh
EOF"
Use expect to control your ssh session. It can be used to give automated responses to the remote shell. Most processes, when run asynchronously, become suspended when they try to read input from the terminal while another foreground process (the main shell) is using it.
There's a post about ssh and expect lately here: https://superuser.com/questions/509545/how-to-run-a-local-script-in-remote-server-using-expect-and-bash-script
Also try disowning your process after placing it in the background, to remove it from job control, e.g.
whole_command &
disown
Redirecting its input to /dev/null might also help, but the command could hang forever if it really needs input from the user.
whole_command <&- < /dev/null &
disown
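Putting the pieces together, the fan-out pattern from the question could look like the sketch below. The hostnames are made up, and a local `sh -c` stands in for the real `ssh … sudo …` invocation so the sketch runs anywhere; swap the commented line back in for actual use:

```shell
#!/bin/sh
# Launch one job per server in the background, each with its own log file.
for host in host1 host2 host3; do
    # Real version (assuming the sudoers/requiretty fixes above):
    #   ssh -t -t -t "$host" -q "sudo ..." > "out-$host.log" 2>&1 &
    sh -c "echo result-from-$host" > "out-$host.log" 2>&1 &
done

wait    # block until every background job has finished

cat out-host2.log    # → result-from-host2
```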
In the following context : VisualVM over ssh
I try to execute the 2 following commands in a single script:
ssh -D 9696 john.doe@121.122.123.124
/usr/bin/jvisualvm -J-Dnetbeans.system_socks_proxy=localhost:9696 \
-J-Djava.net.useSystemProxies=true
Having the two commands like this does not work because the ssh command starts an interactive session, so VisualVM is started only after the ssh session is closed (explicitly with an 'exit').
What could be a good way to solve that issue?
PS. I am running MacOS X.
try:
ssh john.doe@121.122.123.124 '/usr/bin/jvisualvm -J-Dnetbeans.system_socks_proxy=localhost:9696 -J-Djava.net.useSystemProxies=true'
If I understand your use case properly, you want to set up port forwarding with the ssh connection, and the second command then runs on the localhost, using the forwarded port. I think you could try the -f or -n options to ssh to achieve this. It does, however, require a command to be run on the remote host. You could use a bogus command like echo &> /dev/null for that.
EDIT:
Something like this seemed to work in a naïve test:
ssh -f -D <port> remotehost <dummy_program_that_doesnt_quit>
This is best done using an SSH key and screen, so that we can interact with and then close the SSH session.
I'm also presuming jvisualvm takes control of the terminal, so that when it exits, we clean up the screen session. If jvisualvm detaches from the terminal, the script immediately jumps to cleaning up the screen session while jvisualvm is still running.
ssh-add .ssh/key
screen -dmS sshproxy ssh -i .ssh/key -D 9696 john.doe@121.122.123.124
/usr/bin/jvisualvm -J-Dnetbeans.system_socks_proxy=localhost:9696 \
-J-Djava.net.useSystemProxies=true
screen -r -d sshproxy -X quit