Switch from t-shell to bash and source file in one command line - bash

From my user on my machine, I ssh to a shared user on another machine that runs t-shell by default. I would like to create an alias that logs me in to the other machine as the shared user, cds to my personal folder on that machine, switches shell to bash, and sources a script which defines some additional aliases. How can I achieve this?
This is what I've tried so far. From my machine I run:
ssh -ty <otheruser>@<otherhost> 'cd <myfolder>; source tsh.personal'
On the other machine, I have the file ~/<myfolder>/tsh.personal which looks like
#!/bin/tsh
/bin/bash -c 'source ~/<myfolder>/bash.personal'
However, when I use the option -c for bash, it just runs the command and then exits, and then the connection to the other machine closes because all commands passed to the ssh command have finished. I have also tried replacing the last line in ~/<myfolder>/tsh.personal with
/bin/bash -c 'source ~/<myfolder>/bash.personal; /bin/bash'
which tells bash to start another instance of bash, which won't exit. However, when that instance is started, it is as if ~/<myfolder>/bash.personal was never sourced. Are all aliases reset whenever a new instance of bash is started, or why are the aliases not passed on to the new instance?

Change tsh.personal to
exec /bin/bash --rcfile ~/<myfolder>/bash.personal
The exec isn't strictly necessary, but it cleans up the process table by replacing the tsh instance with a bash instance.
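So, from your local machine, the whole thing can be collapsed into a single alias along these lines (user, host and folder names are placeholders; this assumes the remote t-shell login shell runs the quoted command string, which it normally does for ssh commands):
# In ~/.bashrc on the local machine -- otheruser, otherhost and myfolder are placeholders
alias sharedbox="ssh -t otheruser@otherhost 'cd ~/myfolder && exec /bin/bash --rcfile ~/myfolder/bash.personal'"
The -t forces a pseudo-terminal allocation so the interactive bash on the remote side has a terminal to attach to.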

Related

How to launch WSL as if I've logged in?

I have a WSL Ubuntu distro that I've set up so that when I log in, 4 services start, including a web API that I can test via Swagger to verify it is up and working.
What I want to do now is start WSL via a script - that is, launch my distro, have all of the services start, and do it from Python. The problem is I cannot even figure out the correct syntax to get WSL to start from PowerShell in a manner where my services start.
Side note: "services" != systemctl (or similar) calls, but just executing bash CLI commands from either my .bashrc or .profile at login.
I've put the commands to execute in .profile & .bashrc. I've configured it both for root execution and non-root user execution. I've taken the commands out of those 2 files and put them into a script in the Windows file system that I pass in on the start of wsl. And I've put that shell script in the WSL file system as well. Nothing seems to work, and sometimes the distro starts and then stops after about 30 seconds.
Some of the PS CLI commands I've tried:
Start-Job -ScriptBlock{ wsl -d distro -u root }
Start-Job -ScriptBlock{ wsl -d distro -u root 'bash -i -l -c /root/bin/start.sh' }
Start-Job -ScriptBlock{ wsl -d distro -u root 'bash -i -l -c .\start.sh' }
wsl -d distro -u root -- bash -i -l -c /root/bin/start.sh
wsl -d distro -u root -- bash -i -l -c .\start.sh
wsl -d distro -u root -- /root/bin/start.sh
Permutations of the above that I've tried: replacing root with my default login, and turning all of the Start-Job bash options into a comma-separated list of single-quoted strings (Ex: 'bash', '-i', '-l', ... ). Nothing I launch from the CLI will allow me access to the web API that is supposed to be hosted on my distro.
Any advice on what to try next?
Not necessarily an answer here as much as troubleshooting tips which will hopefully lead to an answer:
First, most of the forms that you are using seem to be correct. The only ones that absolutely shouldn't work are those that attempt to run the script from the Windows filesystem.
Make sure that you have a shebang line starting your script. I'm assuming you do, but other readers may come across this as well. For the moment, try this form:
#!/usr/bin/env -S bash -li
That's going to have the same effect as the bash -li you tried -- it will source both interactive startup files such as ~/.bashrc and login profiles such as ~/.bash_profile (and /etc/profile.d/*, etc.).
Note that preferably, you won't need the -li. Best practice would be to move anything necessary for the services over from the startup scripts to your start.sh script, and avoid parsing the profile and rc. I need to go update some of my answers, since I just realized I've been guilty of giving some potentially bad advice ...
Specifically, though, I'm wondering if your interactive Bash config has something truly, well, "interactive" in it that might be preventing the automatic running of the script itself. Again, best practice would be for ~/.bashrc to only hold configuration that is needed for interactive shell sessions.
Make sure the script is set as executable (chmod +x start.sh). Again, I'm assuming this is the case for you.
With a shebang line and an executable script, use something like:
wsl -d distro -u root -e /root/bin/start.sh
The -e tells WSL to launch the script directly. Since it has a shebang line, it will be parsed by Bash. Most of the other forms you used above actually run Bash twice: once when launching WSL and again when it finds the shebang line in the script.
Try some basic troubleshooting for your script like:
Add set -x to the top (right under the shebang line) to turn on script debugging.
Add a ps -efH at the end to show the processes that are running when the script completes.
If needed, resort to quick-and-dirty echo statements to show where things have progressed in the script.
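Putting those tips together, a debug-instrumented sketch of start.sh might look roughly like this (the two service commands are placeholders, not your actual services):
#!/usr/bin/env -S bash -li
set -x                         # turn on script debugging
echo "running as $(whoami)"    # quick-and-dirty progress marker
# Placeholder service launches -- replace with your real commands
nohup /root/bin/web-api >/var/log/web-api.log 2>&1 &
nohup /root/bin/worker >/var/log/worker.log 2>&1 &
ps -efH                        # show what is running once the script completes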
I'm hopeful that the above will at least show you the problem, but if not, add the debugging info that you gain from this to your question, and we can troubleshoot further.

A Bash script which runs commands in two terminal / ssh sessions

I'm trying to automate setting up and configuring a vagrant process with a bash script.
The thing is, I need to ssh into my vagrant machine twice, and I want both terminals to be visible on my screen whilst doing this.
The process is like so...
In terminal 1:
vagrant up
vagrant ssh myhost
wait
cd /my/directory/
... do some commands...
Then I want this terminal to persist / stay open, and a new tab to open where another vagrant session starts
wait
cd /my/other/directory
.... do some commands...
I've got the script working for the first vagrant/terminal session and stored in my /bin/ directory, but how do I add the second?
Exactly how it looks depends on the terminal emulator, but the basic pattern could be as follows:
First script (script1.sh)
vagrant up
vagrant ssh myhost
wait
cd /my/directory/
xterm -e script2.sh &
... do some commands...
Second script (script2.sh)
wait
cd /my/other/directory
.... do some commands...
The trick is to open another terminal window from the first script (for xterm it's xterm -e).
In case you are interested in a way that works independently of the terminal emulator, consider using tmux (terminal multiplexer).
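With tmux, for example, the basic pattern might be something like this (the session name is arbitrary, and script2.sh would then no longer need to be launched from script1.sh):
tmux new-session -d -s vagrant './script1.sh'
tmux split-window -t vagrant './script2.sh'
tmux attach -t vagrant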
Another general hint: it is generally not recommended to store locally-created scripts under /bin. A more common place would be /usr/local/bin or $HOME/bin (although $HOME/bin might need to be added to your PATH separately).
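If you go with $HOME/bin, putting it on your PATH is usually a single line in ~/.bashrc or ~/.profile:
export PATH="$HOME/bin:$PATH"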

Entering text into a docker container via ssh from bash file

What I am trying to do is set up a local development database, and to prevent everyone from having to go through all the steps, I thought it would be useful to create a script.
What I have below stops once it is in the container's terminal, which looks like:
output
./dbSetup.sh
hash of container 0d1b182aa6f1
/ #
At which point I have to manually enter exit.
script
#!/bin/bash
command=$(docker ps | grep personal)
set $command
echo "hash of container ${1}"
docker exec -it ${1} sh
Is there a way I can inject a command via a script into a dockers container terminal?
In order to execute a command inside a container, you can use something like this:
docker exec -ti my_container sh -c "echo a && echo b"
More information available at: https://docs.docker.com/engine/reference/commandline/exec/
Your script finds a running Docker container and opens a shell to it. The "-it" makes it interactive and allocates a tty, which is why it continues to wait for input, e.g. "exit". If the plan is to execute some commands to initialize a local development database, I'd recommend looking at building an image with a Dockerfile instead, i.e. once you figure out the commands to run, they would become RUN commands, and the container started with docker run would expose a local development database.
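As a rough sketch of that approach (the base image and script name here are placeholders, not taken from your setup):
# Dockerfile sketch -- base image and script are placeholders
FROM mydb-base:latest
# The commands you would otherwise type in the container's shell become build steps
COPY setup-dev-db.sh /usr/local/bin/
RUN sh /usr/local/bin/setup-dev-db.sh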
If you really want some commands to run within the shell after it is started and to maintain the session, then depending on the base image, you might be able to mount a bash profile directory that has the required commands, e.g. -v db_profile:/etc/profile.d where db_profile is a folder with the shell scripts you want to run. To get them to run, you'd exec sh -l so that the login startup scripts are executed.
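That idea might look roughly like this (container and image names are placeholders, and whether /etc/profile.d is sourced at login depends on the base image):
# db_profile/ holds the *.sh scripts you want run at login
docker run -d --name my_container -v "$PWD/db_profile:/etc/profile.d" my_image
docker exec -it my_container sh -l    # -l makes sh run the login startup scripts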

How to set the command history in a Dockerfile

I'm running the docker container locally to troubleshoot its state. I don't always want to execute the RUN/ENTRYPOINT; I often want to get into the running container, do some things, and then run the RUN/ENTRYPOINT.
It would be super convenient to have the RUN/ENTRYPOINT available after I docker run bash by just pressing the up key. So I thought it would be nice if I could modify the history with history -s ... in the Dockerfile. That way, as soon as I docker run bash, I can just press up and have the RUN/ENTRYPOINT available.
When I put this in the Dockerfile, I got this error:
/bin/sh: 1: history: not found
Is there a way to set the bash history in a Dockerfile?
You get the error because RUN commands run in /bin/sh, which has no history command available.
To make this work, you need to run an interactive bash shell during the build, so it will store your history entry.
RUN bash -ic 'history -s foobar'
That should leave behind a history file with foobar as its most recent (and probably only) entry.
You will see an error during build about ioctl... that is normal, because interactive bash expects to find a terminal, and there won't be one. But it should still work fine.
bash: cannot set terminal process group (1): Inappropriate ioctl for device
bash: no job control in this shell
Note that this will be stored for the user you run the command as. If your image switches to a non-root user with the USER statement, you should put this after the USER line so that it is stored for the user that your image runs as.
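A minimal sketch of where that could sit in a Dockerfile (the user name and the seeded command are placeholders):
FROM ubuntu:22.04
RUN useradd -m appuser
USER appuser
# Seeds appuser's ~/.bash_history, since it runs after the USER line
RUN bash -ic 'history -s "/app/entrypoint.sh --serve"'
After building, running the image with an interactive bash and pressing the up key should recall the seeded command.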

Embedded terminal startup script

I usually use bash scripts to set up my environments (mostly aliases that interact with Docker), i.e.:
# ops-setup.sh
#!/bin/bash
PROJECT_NAME="my_awesome_project"
PROJECT_PATH=`pwd`/${BASH_SOURCE[0]}
WEB_CONTAINER=${PROJECT_NAME}"_web_1"
DB_CONTAINER=${PROJECT_NAME}"_db_1"
alias chroot_project="cd $PROJECT_PATH"
alias compose="chroot_project;COMPOSE_PROJECT_NAME=$PROJECT_NAME docker-compose"
alias up="compose up -d"
alias down="compose stop;compose rm -f --all nginx web python"
alias web_exec="docker exec -ti $WEB_CONTAINER"
alias db="docker exec -ti $DB_CONTAINER su - postgres -c 'psql $PROJECT_NAME'"
# ...
I'd like them to be run when I open the embedded terminal.
I tried Startup Tasks but they are not run in my terminal contexts.
Since I have a dedicated script for each of my projects, I can't run them from .bashrc or similar.
How can I get my aliases automatically set at terminal opening?
Today I'm running . ./ops-setup.sh manually each time I open a new embedded terminal.
You can create an alias in your .bashrc file like so:
alias ops-setup='bash --init-file <(echo ". /home/test/ops-setup.sh"; echo ". /home/test/.bashrc")'
If you call ops-setup, it will open up a new bash inside your terminal, and source .bashrc like it normally would, as well as your own script.
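If you have one such script per project, the same idea can be wrapped in a small function instead of a per-project alias (the function name and the example path are just placeholders):
# In ~/.bashrc: open a new bash that sources the given setup script and then your normal .bashrc
ops() {
    bash --init-file <(echo ". '$1'"; echo ". $HOME/.bashrc")
}
# usage: ops ~/projects/my_awesome_project/ops-setup.sh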
The only way I see to completely automate this is to modify the source code of your shell, e.g. bash, and recompile it. The files that are sourced are hardcoded into the source code.

Resources