I have a couple of bash commands that I want to execute, and I was hoping to get some help writing a single script that runs them, if possible.
On one console, I want to execute the command iproxy 2222 22. The console will then print waiting for connection.
I then have to open another console and execute ssh -p2222 root@localhost. Once I am remoted into the phone, I want to execute a simple command like ls.
I'm getting stuck on opening a second console and executing the command there.
Can anyone give me some hint?
Thanks
You have several things here:
First, to run both the iproxy and the ssh commands in the same script, without having to use two different consoles, you need to know how to run commands in the background. This is easily done by appending & to the end of a command. In the next example the iproxy command is run in the background while the ssh command runs in the foreground at the same time:
iproxy 2222 22 &
ssh -p2222 root@localhost
Then, to execute a command over the remote shell opened by the ssh command, you just need to include it as the last part of the ssh call. The next example will open an SSH connection to root@localhost on port 2222, execute the ls command in the remote shell, and finally close the SSH connection:
ssh -p2222 root@localhost ls
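If you need more than one remote command, quote them so the whole string is passed to ssh and executed on the remote side (the directory here is only a hypothetical example):
ssh -p2222 root@localhost 'cd /var/log && ls -l'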
Finally, to launch a new terminal and execute a command (or a script) in it, you just need to call the terminal emulator of your choice with the -e option followed by the command (or the name of the script) to be executed. The next example will open a new gnome-terminal and execute the previous ssh example:
gnome-terminal -e "ssh -p2222 root@localhost ls"
Alternatively you can open a new konsole or a new xterm (or any other terminal emulator you may have installed on your system).
You will notice that the terminal closes itself after executing the command. If you need or want it to remain open, you will have to modify the call according to the type of terminal you have opened:
For xterm you will need to use the -hold option.
For konsole you will need to use the --noclose option.
For gnome-terminal this is a bit trickier. The way to do it is to create a profile, set that profile's preferences to hold the terminal open when the command exits, and reference the profile when calling the terminal: gnome-terminal --window-with-profile=NAMEOFTHEPROFILE -e command.
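For example, assuming a gnome-terminal profile named NAMEOFTHEPROFILE has already been created as described above, the three variants would look like this:
xterm -hold -e "ssh -p2222 root@localhost ls"
konsole --noclose -e "ssh -p2222 root@localhost ls"
gnome-terminal --window-with-profile=NAMEOFTHEPROFILE -e "ssh -p2222 root@localhost ls"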
Putting it all together, your script should look more or less like this:
#!/bin/bash
iproxy 2222 22 &
xterm -hold -e "ssh -p2222 root@localhost ls"
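If you also want the script to clean up the background iproxy once you are done, a minimal sketch (assuming iproxy simply keeps running until it is killed) could be:
#!/bin/bash
iproxy 2222 22 &     # start the proxy in the background
iproxy_pid=$!        # remember its PID
xterm -hold -e "ssh -p2222 root@localhost ls"
kill "$iproxy_pid"   # stop the proxy once the xterm window is closed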
Related
This question has a good answer on how to put multiple commands in an alias for bash.
But how would you do it when you first need to ssh into a server, then do something like change directory and then launch jupyter notebook?
I tried something like:
alias shortcut='ssh user@server -p 1234 -L 5678:localhost:91011; cd ~/somedir; jupyter notebook --ip=127.0.0.1'
Maybe it's because my ssh requires me to type in a password, but the last 2 commands aren't being executed.
There are some possible improvements for further convenience, if allowed by the system configuration.
If your needs include executing a series of commands on the remote host, and you need to repeat this often, it's reasonable to put the commands in their own shell script and place it on the remote host.
For example, in this case the script could be just:
#!/bin/sh
cd ~/somedir && jupyter notebook --ip=127.0.0.1
Save the commands in a file, add the execution bit to it, and you can start the session like ssh user@server -p 1234 -L 5678:localhost:91011 path/to/script.sh
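If you then want the whole thing behind a single alias, as in the original question, a sketch (the host, port and script path are placeholders) would be:
alias shortcut='ssh user@server -p 1234 -L 5678:localhost:91011 path/to/script.sh'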
This is touched on in this question, but my preferred way is the low-score answer about putting the script on the remote host -- I'd like each resource to reside where it belongs.
There's also the question of what you want to do after starting the session. It seems the command starts a server process that runs the Jupyter web service. If you just want to stay in the SSH session while monitoring the server, the simple command should suffice. But if you want to keep the server in the background, log its output, and likely leave the SSH session for now, it's possible to run the server with nohup and redirect its output by putting something like this in the script:
nohup jupyter notebook --ip="127.0.0.1" >> stdout.log 2>> stderr.log &
echo "$!" > jupyter-notebook.pid
The second command saves the PID in a file, so it'll be easier to check on or terminate the process later without manually searching for the background process.
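Stopping the server later is then a one-liner, for example:
kill "$(cat jupyter-notebook.pid)"   # terminate the background notebook server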
I am on a server and running a script file that has the following code.
ssh username@servername
sudo su - root
cd /abc/xyz
mkdir asdfg
I am able to ssh, but then the next command is not executed; the script is not sudo-ing. Any idea?
Edit: I was able to create a mech id and get things done that way, though I'm still looking for the answer to the question above :|
First of all, your script will get "stuck" on the first line because it goes into interactive mode. The ssh command will require a password from the user (unless an SSH key is being used), and once ssh is logged into the remote server it will wait for user commands from standard input.
Secondly, the lines following the ssh command will be executed only when the first process has exited. This is why your script is not "sudoing": it's waiting for the ssh to end.
So if your goal is to run a command on a remote server, put the command as a parameter on the same line as the ssh connection. In your case:
ssh user@server sudo su - root
But this will not be satisfactory for you. I suggest you create a script with what you want to execute on the remote server and then execute that script:
ssh user@server scriptName
The sudo part here is very tricky, because again your script might get stuck in interactive mode waiting for a password to be entered, so I suggest you rethink the script with that in mind.
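For example, a hypothetical script placed on the remote server (say /home/username/make-dir.sh) could contain the privileged part explicitly, while ssh -t allocates a terminal so sudo can prompt for its password:
#!/bin/bash
# placed on the remote server; does the privileged work via sudo
sudo mkdir -p /abc/xyz/asdfg
and from the local side:
ssh -t username@servername /home/username/make-dir.sh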
mb47!
You want to run the script on the remote computer, correct?
On the remote machine, create a file containing the commands you would like to execute.
Then, on the other machine, run ssh user@machine /path/to/script/you/created/earlier
I hope this helps!
ALinuxLover
I wish to run a script on the remote system and then stay there.
Running the following script:
ssh user@remote logs.sh
This does run the script, but afterwards I am back on my host system; I need to stay on the remote one. I tried:
ssh user@remote logs.sh;bash -l
This somewhat solves the problem, but it still doesn't behave exactly like a fresh login with the command:
ssh user@remote
Alternatively, it would be even better if I could include something in my script that opens a bash shell in the same directory where the script was running. Please suggest.
Try this:
ssh -t user@remote 'logs.sh; bash -l'
The quotes are needed to pass both commands to ssh. The -t option forces a pseudo-tty allocation.
Discussion
Consider:
ssh user@remote logs.sh;bash -l
When the shell parses this line, it splits it into two commands. The first is:
ssh user@remote logs.sh
This runs logs.sh on the remote machine. The second command is:
bash -l
This opens a login shell on the local machine.
The quotes were added above to prevent the shell from splitting up the commands this way.
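If, as asked above, you also want the interactive shell to start in the directory where the script does its work, one sketch (assuming /path/to/logdir is that directory) is to change into it first and then replace the finished script with a login shell:
ssh -t user@remote 'cd /path/to/logdir && logs.sh; exec bash -l'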
What I'm trying to do: connect to a remote server as a normal user with sudo rights, then sudo to root and execute a command, seeing the output in my local terminal. I wrote a small script like this:
#!/bin/bash
my_argument=$1
ssh -t username@hostname 'sudo su -; /path_to_my_script $1'
I type the password twice (once for ssh, once for sudo), but I see nothing in my local terminal, and the script looks terminated on the remote host. I believe the second problem could be resolved by using exit, but I am a little bit confused about how to get the output into my local terminal.
Thanks
A string inside '' is taken literally, so you are passing a literal dollar sign and 1 as the parameter to the script. If you want the variable to be expanded, place the command inside "" instead. Note also that sudo su -; opens an interactive root shell and only runs the script after that shell exits (and then not as root), so it's simpler to run the script directly under sudo, like:
ssh -t username@hostname "sudo /path_to_my_script $1"
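Putting that back into the original wrapper, a minimal sketch of the fixed script would be:
#!/bin/bash
my_argument=$1
# double quotes let the local shell expand $my_argument before ssh sends the command;
# -t allocates a tty so sudo can prompt for its password
ssh -t username@hostname "sudo /path_to_my_script $my_argument"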
I'm trying to run a command with sudo on a remote machine. When I do it directly with
ssh -t -t -t myserver -q "sudo otheruser<<EOF
remotescript.sh
EOF"
it works fine, but if I add & at the end of the last line then it doesn't work. Why? How can I make it work?
In fact I'm running several such commands (to different servers) from a local script, saving each output in a different file, and would like them to run asynchronously.
Note: running ssh with otheruser#myserver is not an option. I really need to run sudo after I logged in.
Remove requiretty from sudo config (/etc/sudoers) on the remote machine.
Also add the -f option to ssh, which puts the command in the background (from the man page: "must be used when ssh is run in the background").
The "&" should not be needed when using -f.
E.g:
ssh -f -t -t -t myserver -q "sudo otheruser<<EOF
remotescript.sh
EOF"
Use expect to control your ssh; it can be used to give automated responses to the remote shell. Most processes, when run asynchronously, suspend themselves or become suspended when they try to read input from the terminal, since another foreground process (the main shell) is using it.
There's a post about ssh and expect lately here: https://superuser.com/questions/509545/how-to-run-a-local-script-in-remote-server-using-expect-and-bash-script
Also try to disown your process after placing it in the background, to remove it from job control, e.g.
whole_command &
disown
Changing its input to /dev/null might also help, but the process could hang forever if it really needs input from the user.
whole_command <&- < /dev/null &
disown