A Bash script which runs commands in two terminal / ssh sessions - bash

I'm trying to automate setting up and configuring a vagrant process with a bash script.
The thing is, I need to ssh into my vagrant machine twice, and I want both terminals to be visible on my screen whilst doing this.
The process is like so...
In terminal 1:
vagrant up
vagrant ssh myhost
wait
cd /my/directory/
... do some commands...
Then I want this terminal to persist / stay open, and a new tab to open where another vagrant session starts
wait
cd /my/other/directory
.... do some commands...
I've got the script working for the first vagrant/terminal session and stored in my /bin/ directory, but how do I add the second?

How it looks exactly depends on the terminal emulator, but the basic pattern could be as follows:
First script (script1.sh)
vagrant up
vagrant ssh myhost
wait
cd /my/directory/
xterm -e script2.sh &
... do some commands...
Second script (script2.sh)
wait
cd /my/other/directory
.... do some commands...
The trick is to open another terminal window from the first script (for xterm it's xterm -e).
In case you are interested in a way that works independently of the terminal emulator, consider using tmux (terminal multiplexer).
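For example, a minimal tmux sketch (assuming the script1.sh and script2.sh shown above are on your PATH, with the xterm line dropped from script1.sh) could be:
# run the first script in a detached session, add a pane for the second, then attach
tmux new-session -d -s vagrant-setup 'script1.sh'
tmux split-window -t vagrant-setup 'script2.sh'
tmux attach -t vagrant-setup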
Another general hint: it is generally not recommended to store locally-created scripts under /bin. A more common place would be /usr/local/bin or $HOME/bin (although $HOME/bin might need to be added to your PATH separately).

Related

How to launch WSL as if I've logged in?

I have a WSL Ubuntu distro that I've set up so that when I login 4 services start working, including a web API that I can test via Swagger to verify it is up and working.
I'm at the point where what I want to do now is start WSL via a script - that is, launch my distro, have all of the services start, and do it from Python. The problem is I cannot even figure out the correct syntax to get WSL to start from PowerShell in a manner where my services start.
Side note: "services" != systemctl (or similar) calls, but just executing bash CLI commands from either my .bashrc or .profile at login.
I've put the commands to execute in .profile & .bashrc. I've configured it both for root execution and non-root user execution. I've taken the commands out of those 2 files and put them into a script in the Windows file system that I pass in on the start of wsl. And I've put that shell script in the WSL file system as well. Nothing seems to work, and sometimes the distro starts and then stops after about 30 seconds.
Some of the PS CLI commands I've tried:
Start-Job -ScriptBlock{ wsl -d distro -u root }
Start-Job -ScriptBlock{ wsl -d distro -u root 'bash -i -l -c /root/bin/start.sh' }
Start-Job -ScriptBlock{ wsl -d distro -u root 'bash -i -l -c .\start.sh' }
wsl -d distro -u root -- bash -i -l -c /root/bin/start.sh
wsl -d distro -u root -- bash -i -l -c .\start.sh
wsl -d distro -u root -- /root/bin/start.sh
Permutations of the above that I've tried: replacing root with my default login, and turning all of the Start-Job bash options into a comma-separated list of single-quoted strings (Ex: 'bash', '-i', '-l', ... ). Nothing I launch from the CLI gives me access to the web API that is supposed to be hosted on my distro.
Any advice on what to try next?
Not necessarily an answer here as much as troubleshooting tips which will hopefully lead to an answer:
First, most of the forms that you are using seem to be correct. The only ones that absolutely shouldn't work are those that attempt to run the script from the Windows filesystem.
Make sure that you have a shebang line starting your script. I'm assuming you do, but other readers may come across this as well. For the moment, try this form:
#!/usr/bin/env -S bash -li
That's going to have the same effect as the bash -li you tried -- it will source both interactive startup files such as ~/.bashrc and login profiles such as ~/.bash_profile (and /etc/profile.d/*, etc.).
Note that preferably, you won't need the -li. Best practice would be to move anything necessary for the services over from the startup scripts to your start.sh script, and avoid parsing the profile and rc. I need to go update some of my answers, since I just realized I've been guilty of giving some potentially bad advice ...
Specifically, though, I'm wondering if your interactive Bash config has something truly, well, "interactive" in it that might be preventing the automatic running of the script itself. Again, best practice would be for ~/.bashrc to only hold configuration that is needed for interactive shell sessions.
Make sure the script is set as executable (chmod +x start.sh). Again, I'm assuming this is the case for you.
With a shebang line and an executable script, use something like:
wsl -d distro -u root -e /root/bin/start.sh
The -e tells WSL to launch the script directly. Since it has a shebang line, it will be parsed by Bash. Most of the other forms you used above actually run Bash twice: once when launching WSL, and again when it finds the shebang line in the script.
Try some basic troubleshooting for your script (a combined sketch follows this list):
Add set -x to the top (right under the shebang line) to turn on script debugging.
Add a ps -efH at the end to show the processes that are running when the script completes.
If needed, resort to quick-and-dirty echo statements to show how far the script has progressed.
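Put together, a hypothetical debugging version of start.sh (the service commands themselves are placeholders) might look like:
#!/usr/bin/env -S bash -li
set -x          # trace each command as it runs
# ... your service startup commands here ...
ps -efH         # show the process tree just before the script exits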
I'm hopeful that the above will at least show you the problem, but if not, add the debugging info that you gain from this to your question, and we can troubleshoot further.

why Jenkins shell script hangs when i run sudo pm2 ls

I confess I am a total newbie to Jenkins.
I have
jenkins-lts
installed on my Mac for experimentation.
I have a remote server that I am testing with.
My Jenkins script is ultra simple.
ssh to the remote machine
sudo pm2 ls
the last command just hangs
I run the same 2 commands from the command line and it all works perfectly.
FYI, I need sudo for pm2 since I need to be root to run pm2, without sudo, I get access denied.
Any thoughts?
I believe you are making the invalid assumption that Jenkins somehow "types" commands into the remote session's command shell after starting ssh. This is not what happens. Instead, it will wait for the ssh command to finish, and only then execute the next command, sudo pm2 ls. This never happens, because the ssh session never terminates. You observe this as a "hang".
How to solve this?
If there are only a small number of commands, you can use ssh to run them with
ssh user@remote sudo pm2 ls
ssh user@remote command arg1 arg2
If this gets longer, why not place all commands in a remote script and just run it with
ssh user@remote /path/to/script
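If you would rather keep the script on the Jenkins side, a common pattern (a sketch: deploy.sh is a hypothetical local script, and this assumes passwordless sudo on the remote host) is to feed it to a remote shell over stdin:
# run a local script on the remote machine without copying it there first
ssh user@remote 'sudo bash -s' < deploy.sh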

cygwin: vagrant ssh, empty command prompt

If I vagrant ssh with Windows cmd, I get a nice command prompt, like this:
vagrant@homestead:~$ echo foo
vagrant@homestead:~$ foo
But with cygwin and mintty, I have no prompt at all:
echo foo
foo
I see it has to do with "pseudo-tty allocation".
With cygwin and mintty, I can have my prompt with this :
vagrant ssh -- -t -t
How can I configure cygwin and mintty so that I don't have to pass the -t?
About the ssh -t option:
"Force pseudo-tty allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g., when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty."
I had the same problem, and the solution was to set the VAGRANT_PREFER_SYSTEM_BIN environment variable to get vagrant to use your normal ssh executable.
You can do:
VAGRANT_PREFER_SYSTEM_BIN=1 vagrant ssh
or put this into your .bash_profile:
export VAGRANT_PREFER_SYSTEM_BIN=1
Reference: https://github.com/hashicorp/vagrant/issues/9143#issuecomment-343311263
I ran into the same problem described above, but only on one of three PCs. As a workaround I am doing:
# save the config to a file
vagrant ssh-config > vagrant-ssh
# run ssh with the file.
ssh -F vagrant-ssh default
From an answer to How to ssh to vagrant without actually running "vagrant ssh"?
In this case I get the prompt, and, more importantly, history cycling, Ctrl-C, etc. work properly.
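To avoid retyping the two commands, they can be wrapped in a small shell function (a sketch: the name vssh is hypothetical, and it assumes it is run from the directory containing the Vagrantfile):
# regenerate the ssh config, then connect; the machine name defaults to "default"
vssh() {
    vagrant ssh-config > vagrant-ssh &&
    ssh -F vagrant-ssh "${1:-default}"
}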
Vagrant is a Windows program for managing virtual machines:
https://www.vagrantup.com/intro/index.html
As such, it does not interface well with the pseudo-tty structure used by Cygwin programs.
Read for reference on similar issues with a lot of other Windows programs:
https://github.com/mintty/mintty/issues/56
Mintty is a Cygwin program. It expects interactive programs running inside it to use the Cygwin tty functionality for interactive behaviour.
Running Vagrant from Windows CMD makes CMD the controlling terminal, so Vagrant has no problem with interactive behaviour.
I do not see the need to run Vagrant inside Cygwin.
Since Vagrant is Windows-based, I use ConEmu instead of Cygwin's shell (mintty).
Install it via Chocolatey with choco install conemu and it works.
The general solution is to teach Vagrant to use an ssh that is compatible with your preferred terminal, like Cygwin ssh + mintty.
Modern Vagrant (v2.1.2) has VAGRANT_PREFER_SYSTEM_BIN=1 by default on Windows.
To troubleshoot the issue:
VAGRANT_LOG=info vagrant ssh
In v2.1.2 they broke Cygwin support. See my bug report with a hack to lib/vagrant/util/ssh.rb to make it work.

run ssh script into ubuntu instance do something, when exit, stay in ubuntu

I am running a very simple script that will ssh into a remote ubuntu instance, move around the directory structure, and execute a few things; then I want the prompt to stay in Ubuntu. When the script ends, it ends back at the local prompt. How do I modify the script so that it finishes at the remote prompt?
local$ ssh -i xxx.pem ubuntu@xxx.ap-region.compute.amazonaws.com \
"cd virtualenv; ls -lh;"
There are two things that need to be added to your command line:
The bash command at the end starts the bash shell (you can start any other shell you want).
The -t switch makes sure the remote server allocates a TTY for you, so your shell will work as expected:
local$ ssh -t -i xxx.pem ubuntu@xxx.ap-region.compute.amazonaws.com \
"cd virtualenv; ls -lh; bash"

Switch from t-shell to bash and source file in one command line

From my user on my machine, I ssh to a shared user on another machine that runs t-shell by default. I would like to create an alias that logs me in to the other machine as the shared user, cds to my personal folder on that machine, switches shell to bash, and sources a script which defines some additional aliases. How can I achieve this?
This is what I've tried so far. From my machine I run:
ssh -ty <otheruser>@<otherhost> 'cd <myfolder>; source tsh.personal'
On the other machine, I have the file ~/<myfolder>/tsh.personal which looks like
#!/bin/tsh
/bin/bash -c 'source ~/<myfolder>/bash.personal'
However, when I use the option -c for bash, it just runs the command and then exits, and then the connection to the other machine closes because all the commands passed to the ssh command have finished. I have also tried replacing the last line in ~/<myfolder>/tsh.personal with
/bin/bash -c 'source ~/<myfolder>/bash.personal; /bin/bash'
which tells bash to start another instance of bash, which won't exit. However, when that instance is started, it is as if ~/<myfolder>/bash.personal was never sourced. Are all aliases reset whenever a new instance of bash is started, or why are the aliases not passed to the new instance?
Change tsh.personal to
exec /bin/bash --rcfile ~/<myfolder>/bash.personal
The exec isn't strictly necessary, but it cleans up the process table by replacing the tsh instance with a bash instance. As for why the aliases disappeared: aliases only exist in the shell that defined them and are never inherited by child shells, so the fresh /bin/bash started by your second attempt knew nothing about them. --rcfile makes the new interactive bash read bash.personal itself (instead of ~/.bashrc), so the aliases are defined in the shell you actually end up using.
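Tying this back to the original goal, the local alias could then look something like this (a sketch keeping the placeholder names from the question; the alias name is made up):
alias sharedhost='ssh -t <otheruser>@<otherhost> "cd <myfolder> && source tsh.personal"'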
