What is common between environments within a shell terminal session? - bash

I have a custom shell script that runs each time a user logs in or an identity is assumed; it's been placed in /etc/profile.d and performs some basic environment variable operations. Recently I added some code so that if screen is running it will reattach without my needing to type anything. There are some problems, however. If I log in as root and su - to another user, the code runs a second time. Is there a variable I can set when the code runs the first time that will prevent a second run of the code?
I thought about writing something to disk, but I don't want to prevent the code from running when I begin a new terminal session. Here is the code in question. It first attempts to reattach; if that is unsuccessful because the session is already attached (as it might be after an interrupted session), it will 'take' the session back.
screen -r
if [ -z "$STY" ]; then
    exec screen -dR
fi
Ultimately this bug prevents me from switching to another user, because as soon as I do, it grabs the screen session and puts me right back where I started. Pretty frustrating.

The $PPID of the shell you get after su will be the PID of the su command itself. So the output of
ps -o command= -p $PPID
will begin with the letters su, so test for this.
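For example, a sketch of that test for the /etc/profile.d script (the helper name is hypothetical, and it assumes ps -o command= prints the parent's command line):

```shell
# Hypothetical helper: does a command line begin with "su"?
is_su_parent() {
    case "$1" in
        'su' | 'su '*) return 0 ;;
        *) return 1 ;;
    esac
}

# Only (re)attach screen when this shell was NOT spawned by su.
if is_su_parent "$(ps -o command= -p $PPID)"; then
    echo "shell was started by su; skipping the screen logic"
else
    echo "fresh login; here you would run: exec screen -dR"
fi
```

The echo lines stand in for the real actions so the sketch is safe to run anywhere.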

I think you can get this right if you read the following post (and the man page for your favorite shell):
Question about login vs profile

Related

fish shell login commands keep running on screen or tmux session after login

I've just switched to fish shell, and I've followed the instructions in How do I run a command every login? What's fish's equivalent to .bashrc?
That is, I've moved the commands I prefer to run upon login from .bashrc to ~/.config/fish/config.fish.
But now the commands run again whenever I open a screen or tmux session! This never happened with my previous default shell (the commands ran only at login and were never re-run inside a screen session).
How can I avoid this?
Thanks in advance.
You can test the TERM environment variable to see whether your shell is running in such a session; both screen and tmux set it to 'screen' by default (or a variant such as 'screen-256color').
if not string match --quiet -e 'screen' $TERM
    <your startup scripts>
end
(Note the argument order: string match takes the pattern first, then the string to test; with -e/--entire a substring match is enough, so values like 'screen-256color' are caught too.)
Note that other useful indicators are whether a shell is interactive or a login shell. You can use status --is-interactive and status --is-login to check for these two states.
In your specific case, a check for login shell might be what you are looking for:
if status --is-login
    <your startup scripts>
end
See https://unix.stackexchange.com/questions/38175/difference-between-login-shell-and-non-login-shell for an explanation.

Safely exec tmux upon login

I would like to execute tmux upon logging into a shell for my user. I am using fish, but I think this question is relevant to any shell. So far, I've accomplished this by following the advice in this question: https://askubuntu.com/questions/560253/automatically-running-tmux-in-fish, specifically, adding the following line to my config.fish:
test $TERM != "screen"; and exec tmux
However, I have one major issue with this approach: if tmux fails to start, perhaps because I've introduced a syntax error in my .tmux.conf file, the shell process immediately exits, booting me out of the session.
Is there a way to automatically run tmux in new shell executions whereby I can:
Catch errors and fall back on a "plain" shell execution (i.e. just fish without tmux)
Not have to exit a login twice (once to quit tmux, then again to quit fish)?
I imagine tmux exits with a non-zero (i.e. failing) status if there are configuration errors, so you could presumably ditch the exec and exit manually, like:
if test $TERM != "screen"
    tmux
    and exit
end
However, do keep in mind that fish always sources all of its config files, so you'll want to wrap this inside if status --is-login or similar.
This works for me:
if status --is-login
    source $HOME/.config/fish/login.fish
    tmux; and exec true
end
Obviously you may or may not have a login.fish file. I like to keep my config.fish lean by putting code that might not be needed for the current session in separate files, so I've also got an interactive.fish script.

unix - running a shell script in the background and creating an output log

What's the best way to run this shell script when I need to create an output log and, at the same time, run it in the background? The catch is that I need to pass in a couple of parameters and then enter a password.
For example I execute the shell script like so:
-bash-4.3$ ./tst.sh param1 param2 >> tst.log
Password for user mas:
I need to pass in two parameters and am then prompted for a password:
./tst.sh <param1> <param2>
This works, but I have to keep the session open; I want it to go to the background (or something similar) so it will continue to run if my connection to the host fails.
If you want to run something that will survive if your connection fails you should run it in a screen or tmux session. You can use those to create sessions that you can disconnect from and reconnect to, and many other really cool things once you start really getting into them.
So if you ssh in and then run screen you'll still be at a bash prompt, but if you run a command and then press Ctrl-a followed by d, you will detach from that session. Everything running inside screen will keep going, and you'll be able to reconnect with screen -x later. You can have many screen sessions at the same time, too; use screen -ls to see which are running, then screen -x <id> to reconnect to a particular one.
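Put together, the workflow described above looks something like this (host and session name are examples; this is an interactive transcript rather than a script):

```
ssh user@host                       # connect to the remote machine
screen -S tstjob                    # start a named screen session
./tst.sh param1 param2 >> tst.log   # run the script; type the password
# press Ctrl-a, then d, to detach -- the script keeps running
screen -ls                          # later: list running sessions
screen -x tstjob                    # reattach to that session
```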

How to ssh into a shell and run a script and leave myself at the prompt

I am using Elastic MapReduce from Amazon. I ssh into the hadoop master node and execute a script like:
$EMR_BIN/elastic-mapreduce --jobflow $JOBFLOW --ssh < hivescript.sh
It sshes me into the master node and runs the hive script. The hive script contains the following lines:
hive
add jar joda-time-1.6.jar;
add jar EmrHiveUtils-1.2.jar;
and some commands to create hive tables. The script runs fine and creates the hive tables and everything else, but then returns to the prompt from which I ran the script. How do I leave it sshed into the hadoop master node, at the hive prompt?
Consider using Expect, then you could do something along these lines and interact at the end:
/usr/bin/expect <<EOF
spawn ssh ... YourHost
expect "password"
send "password\r"
send "javastuff\r"
interact
EOF
These are the most common answers I've seen (with the drawbacks I ran into with them):
Use expect
This is probably the most well rounded solution for most people
I cannot control whether expect is installed in my target environments
Just to try this out anyway, I put together a simple expect script to ssh to a remote machine, send a simple command, and turn control over to the user. There was a long delay before the prompt showed up, and after fiddling with it with little success I decided to move on for the time being.
Eventually I came back to this as the final solution after realizing I had violated one of the 3 virtues of a good programmer -- false impatience.
Use screen / tmux to start the shell, then inject commands from an external process.
This works OK, but if the terminal window dies it leaves a screen/tmux instance hanging around. I could certainly try to come up with a way to re-attach to prior instances or kill them; screen (and probably tmux) can be made to die instead of auto-detaching, but I didn't fiddle with it.
If using gnome-terminal, use its -x or --command flag (I'm guessing xterm and others have similar options)
I'll go into more detail on problems I had with this on #4
Make a bash script with #!/bin/bash --init-file as the shebang; this will cause your script to execute, then leave an interactive shell running afterward
This and #3 had issues with some programs that required user interaction before the shell is presented. Some programs (like ssh) worked fine; others (telnet, vxsim) presented a prompt, but no text was passed along to the program, only ctrl characters like ^C.
Do something like this: xterm -e 'commands; here; exec bash'. This will cause it to create an interactive shell after your commands execute.
This is fine as long as the user doesn't attempt to interrupt with ^C before the last command executes.
Currently, the only thing I've found that gives me the behavior I need is to use cmdtool from the OpenWin project.
/usr/openwin/bin/cmdtool -I 'commands; here'
# or
/usr/openwin/bin/cmdtool -I 'commands; here' /bin/bash --norc
The resulting terminal injects the list of commands passed with -I to the program executed (no parms means default shell), so those commands show up in that shell's history.
What I don't like is that the terminal cmdtool provides feels so clunky ... but alas.
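For reference, approach #4 above can be sketched like this (the file name is an example; this only shows the mechanism):

```shell
# The --init-file shebang runs the script's commands first and then
# leaves an interactive bash running with whatever state they set up.
cat > work-env.sh <<'INIT'
#!/bin/bash --init-file
echo "setting up..."
export WORKDIR=/tmp
INIT
chmod +x work-env.sh
```

Running ./work-env.sh from a terminal prints the message and then leaves you at an interactive bash prompt with WORKDIR set; exiting that shell ends the script.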

Automate a Ruby command without it exiting

This should hopefully be an easy question to answer. I am attempting to have mumble-ruby run automatically. I have everything up and running, except that the simple script below ends right after it runs. In short:
Running this from a terminal, I get "Press enter to terminate script" and it works.
Running this via a cronjob runs the script but then ends it and runs cli.disconnect (I assume).
I want the below script to run automatically via a cronjob at a specified time and not end until the server shuts down.
#!/usr/bin/env ruby
require 'mumble-ruby'
cli = Mumble::Client.new('IP Address', Port, 'MusicBot', 'Password')
cli.connect
sleep(1)
cli.join_channel(5)
stream = cli.stream_raw_audio('/tmp/mumble.fifo')
stream.volume = 2.7
print 'Press enter to terminate script';
gets
cli.disconnect
Assuming you are on a Unix/Linux system, you can run it in a screen session. (This is a Unix command, not a scripting function.)
If you don't know what screen is, it's basically a "detachable" terminal session. You can open a screen session, run this script, and then detach from that screen session. That detached session will stay alive even after you log off, leaving your script running. (You can re-attach to that screen session later if you want to shut it down manually.)
screen is pretty neat, and every developer on Unix/Linux should be aware of it.
How to do this without reading any docs:
open a terminal session on the server that will run the script
run screen - you will now be in a new shell prompt in a new screen session
run your script
type ctrl-a then d (without ctrl; the "d" is for "detach") to detach from the screen (but still leave it running)
Now you're back in your first shell. Your script is still alive in your screen session. You can disconnect and the screen session will keep on trucking.
Do you want to get back into that screen and shut the app down manually? Easy! Run screen -r (for "reattach"). To kill the screen session, just reattach and exit the shell.
You can have multiple screen sessions running concurrently, too. (If there is more than one screen running, you'll need to provide an argument to screen -r.)
Check out some screen docs!
Here's a screen howto. Search "gnu screen howto" for many more.
Lots of ways to skin this cat... :)
My thought was to take your script (call it foo) and remove the last 3 lines. In your /etc/rc.d/rc.local file (NOTE: this applies to Ubuntu and Fedora; not sure what you're running, but it probably has something similar) you'd add nohup /path_to_foo/foo > /dev/null 2>&1 & to the end of the file so that it runs in the background. (Note the redirection order: > /dev/null 2>&1 discards both stdout and stderr, whereas 2>&1 > /dev/null would leave stderr on the terminal.) You can also run that command right at a terminal if you just want to start it and leave it running. You have to make sure that foo is made executable with chmod +x /path_to_foo/foo.
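A minimal runnable sketch of the nohup part (the script body and file names are stand-ins; the rc.local wiring is omitted):

```shell
# Create a stand-in for foo, then run it detached from the terminal.
# Here output goes to a log instead of /dev/null so we can inspect it.
printf '%s\n' '#!/bin/sh' 'echo "foo is running with arg: $1"' > foo
chmod +x foo

nohup ./foo param1 > foo.log 2>&1 &
wait $!     # only so this example finishes; normally you would just log out
cat foo.log
```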
Use an infinite loop. Try:
loop do
  sleep(3600)
end
You can use exit to terminate when you need to. This wakes the loop only once an hour, so it doesn't eat up processing time. An infinite loop before your disconnect method will prevent it from being called until the server shuts down. (Note: while running do would raise a NameError, since running is never defined; Ruby's loop do is the idiomatic infinite loop.)
