I am trying to create a shell script that will notify me (not by email) if certain users log on to a Linux machine - bash

Basically I have a list of user names in a text file that I need to watch. All I need is a simple script to notify me if they log on to the system, but not by email.

This is a brute-force solution: it uses SSH to connect to the server if you specify a user@host combo, otherwise it checks the local machine. It assumes passwordless public-key access, and it uses the last command to poll the most recent login every 30 seconds.
The notify-send command is used for the pop-up, which assumes a desktop Linux machine.
#!/bin/sh
# poll last every 30s and notify when the most recent login entry changes
host=$1
while true; do
    if [ -n "$host" ]; then
        last_user=$(ssh "$host" last)
    else
        last_user=$(last)
    fi
    # keep user, tty and origin of the newest entry
    last_user=$(printf '%s\n' "$last_user" | head -n1 | awk '{ print $1, $2, $3 }')
    if [ "$last_user" != "$previous_user" ]; then
        notify-send "$last_user"
        previous_user=$last_user
    fi
    sleep 30
done

The notify-send command (shipped with the libnotify package on most distributions) would do this. You would have to hook logins into notify-send, which in turn puts a notification bubble on screen.
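To tie this back to the username list in the question, here is a minimal sketch; the watch-list file format, the use of who, and the 30-second interval are my assumptions, not part of the answer above. It filters the currently logged-in users against the file and notifies only on matches.

```shell
#!/bin/sh
# print the logged-in names (read from stdin, one per line) that also
# appear in the watch-list file given as $1
watched_logins() {
    sort -u | grep -Fx -f "$1"
}

# polling loop, along the lines of:
#   while true; do
#       hits=$(who | awk '{print $1}' | watched_logins /path/to/watchlist.txt)
#       [ -n "$hits" ] && notify-send "Watched login" "$hits"
#       sleep 30
#   done
```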

How to use STDOUT inside /etc/ssh/sshrc without breaking SCP

I want to call a program when any SSH user logs in that prints a welcome message. I did this by editing the /etc/ssh/sshrc file:
#!/bin/bash
ip=`echo $SSH_CONNECTION | cut -d " " -f 1`
echo $USER logged in from $ip
For simplicity, I replaced the program call with a simple echo command in the example
The problem is, I learned SCP is sensitive to any script that prints to stdout in .bashrc or, apparently, sshrc. My SCP commands failed silently. This was confirmed here: https://stackoverflow.com/a/12442753/2887850
Lots of solutions offered quick ways to check if the user is in an interactive terminal:
if [[ $- != *i* ]]; then return; fi
Fails because [ is not linked
case $- in *i*) ... esac
Fails because in is not recognized?
Use the tty program (same result as above)
tty gave me a bizarre error code when executed from sshrc
While all of those solutions could work in a normal BASH environment, none of them work in the sshrc file. I believe that is because PATH (and I suspect a few other things) aren't actually available when executing from sshrc, despite specifying BASH with a shebang. I'm not really sure why this is the case, but this link is what tipped me off to the fact that sshrc is running in a limited environment.
So the question becomes: is there a way to detect interactive terminal in the limited environment that sshrc executes in?
Use test to check $SSH_TTY (the final solution from this link):
test -z "$SSH_TTY" || echo "$USER logged in from $ip"
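Putting it together, a minimal sketch of the check; it is written here as a function so it is easy to exercise, whereas in the real /etc/ssh/sshrc you would inline the body:

```shell
# print a greeting only when a tty was allocated; SSH_TTY is unset for
# scp/sftp and non-interactive commands, so they stay silent
ssh_greet() {
    [ -n "$SSH_TTY" ] || return 0
    ip=${SSH_CONNECTION%% *}    # first space-separated field: client IP
    echo "$USER logged in from $ip"
}
```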

Challenge in setting up a script that connects to multiple servers as an AutoSys job

The purpose of the script is to log in to multiple servers, execute the df -k command, and send out an email containing a list similar to the one below:
Thu Nov 3 12:59:49 EDT 2016
Running out of space "/opt (80%)" on (a******001s02)
Running out of space "/var (83%)" on (a*******01s01)
Running out of space "/opt/IBM/ITM (98%)" on (a*******001s01)
Running out of space "/apps (80%)" on (a*********01s01)
Running out of space "/opt/wily (80%)" on (a********01s01)
My challenge is :
This script has to run as a scheduled job, such as an AutoSys job. The challenge is that, as far as I understand (not 100 percent sure), the username (ssh $username@$host) should be a sudo user for the script to run as an AutoSys job. But if I hardcode the username as a sudo user, the ssh command won't work; we cannot log in to multiple servers with that username.
Here I have used logname, so logname picks up the name of the user executing the command. But logname doesn't seem to work if I turn this into an AutoSys job.
Basically, I want to make this script run as an AutoSys job. Currently it works perfectly fine if I run it manually. I appreciate any help. Thanks in advance :)
Entire script:
#!/bin/sh
# Shell script to watch disk space
MAILTO="xxxxxxx@xxxxx.com"
# Alert threshold in percent
ALERT=70
path=/tmp/Joshma
report=$path/report.txt
date > $report
echo >> $report
servers=$path/serversfile.txt
username=`logname`
for host in `cat $servers`
do
  echo "$host"
  ssh $username@$host df -k | grep -vE '^Filesystem|tmpfs|cdrom|proc' | awk '{ print $4 " " $7 }' | while read output;
  do
    echo " $output"
    usep=$(echo $output | awk '{ print $1 }' | cut -d'%' -f1)
    partition=$(echo $output | awk '{ print $2 }')
    if [ "$usep" -ge "$ALERT" ]
    then
      echo "Running out of space \"$partition ($usep%)\" on ($host)\n" >> $report
    fi
  done
done
cat $report | mail -s "File System Space Check" $MAILTO
The job owner value in an AutoSys job can be any user ID that can log on to the server (the machine value) that is executing the AutoSys job. The AutoSys agent will spawn a child process that executes the command as that user ID. Use the ID that you use to manually run the script as the job owner if you can. If not, make sure that the ID you use has the appropriate permissions to ssh to your servers and execute the commands. It is not an AutoSys requirement that the job owner have sudo on a server.
If the script runs manually but fails in the AutoSys job using the same ID and server, start looking at the environment set in the child process. By default AutoSys sources a generic profile that may not be the same as the one set when logging in manually with that ID. You may have to set the profile: value in the job to source a profile file that sets the environment correctly when the child process is started.
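A common way around both problems is to stop depending on the spawned environment at all: pin PATH at the top of the script, and take the user name from configuration instead of logname (which needs a login session and fails under schedulers). A sketch; the JOB_USER variable and the svc_diskmon account name are hypothetical placeholders:

```shell
#!/bin/sh
# pin down the environment instead of relying on whatever profile
# AutoSys sources for the child process
PATH=/usr/bin:/bin:/usr/local/bin
export PATH
# logname fails without a login session; use a configured ID instead
username=${JOB_USER:-svc_diskmon}   # hypothetical service account
echo "connecting as $username"
```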

How can I start an ssh session with a script without redirecting stdin?

I have a series of bash commands, some with interactive prompts, that I need run on a remote machine. I have to have them called in a certain order for different scenarios, so I've been trying to make a bash script to automate the process for me. However, it seems like every way to start an ssh session with a bash script results in the redirection of stdin to whatever string or file was used to initiate the script in the first place.
Is there a way I can specify that a certain script be executed on a remote machine, but also forward stdin through ssh to the local machine to enable the user to interact with any prompts?
Here's a list of requirements I have to clarify what I'm trying to do.
Run a script on a remote machine.
Somewhere in the middle of that remote script there will be a command that prompts for input. Example: git commit will bring up vim.
If that command is git commit and it brings up vim, the user should be able to interact with vim as if it was running locally on their machine.
If that command prompts for a [y/n] response, the user should be able to input their answer.
After the user enters the necessary information—by quitting vim or pressing return on a prompt—the script should continue to run like normal.
My script will then terminate the ssh session. The end product is that commands were executed for the user without them needing to be aware that it was through a remote connection.
I've been testing various different methods with the following script that I want run on the remote machine.
#!/bin/bash
echo hello
vim
echo goodbye
exit
It's crucial that the user be able to use vim, and then, when the user finishes, "goodbye" should be printed to the screen and the remote session should be terminated.
I've tried uploading a temporary script to the remote machine and then running ssh user@host bash /tmp/myScript, but that seems to also take over stdin completely, rendering it impossible to let the user respond to prompts for user input. I've tried adding the -t and -T options (I'm not sure if they're different), but I still get the same result.
One commenter mentioned using expect, spawn, and interact, but I'm not sure how to use those tools together to get my desired behavior. It seems like interact will result in the user gaining control over stdin, but then there's no way to have it relinquished once the user quits vim in order to let my script continue execution.
Is my desired behavior even possible?
Ok, I think I've found my problem. I was creating a wrapper script for ssh that looked like this:
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands=$(</dev/stdin)
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
    rm $tempScript
fi
RM
exit $errorCode
It was there that I was redirecting stdin, not ssh. I should have mentioned this when I formulated my question. I read through that script over and over again, but I guess I just overlooked that one line. Removing that line totally fixed my problem.
Just to clarify, changing my script to the following totally fixed my problem.
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands="$@"
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
    rm $tempScript
fi
RM
exit $errorCode
Once I changed my wrapper script, my test script described in the question worked! I was able to print "hello" to the screen, vim appeared and I was able to use it like normal, and then once I quit vim "goodbye" was printed and the ssh client closed.
The commenters to the question were pointing me in the right direction the whole time. I'm sorry I only told part of my story.
I've searched for solutions to this problem several times in the past, but never found a fully satisfactory one. Piping into ssh loses your interactivity. Two connections (scp/ssh) are slower, and your temporary file might be left lying around. And putting the whole script on the command line often ends up in escaping hell.
Recently I noticed that the command-line buffer size is usually quite large (getconf ARG_MAX was > 2 MB where I looked), and this got me thinking about how I could use that to mitigate the escaping issue.
The result is:
ssh -t <host> /bin/bash "<(echo "$(cat my_script | base64 | tr -d "\n")" | base64 --decode)" <arg1> ...
or using a here document and cat:
ssh -t <host> /bin/bash $'<(cat<<_ | base64 --decode\n'$(cat my_script | base64)$'\n_\n)' <arg1> ...
I've expanded on this idea to produce a fully working BASH example script sshx that can run arbitrary scripts (not just BASH), where arguments can be local input files too, over ssh. See here.
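The encode/decode round trip is easy to sanity-check locally without any ssh at all; in this sketch the inline script and its arguments are throwaway examples:

```shell
# encode a script, then decode and run it with arguments, which is
# exactly what the remote /bin/bash ends up doing in the one-liner above
tmp=$(mktemp)
printf '%s\n' 'echo "arg count: $#"' > "$tmp"
encoded=$(base64 < "$tmp" | tr -d '\n')
printf '%s' "$encoded" | base64 --decode | bash -s -- one two   # prints: arg count: 2
rm -f "$tmp"
```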

mac crontab open application behaves differently than opening from Dock

I wrote a shell script that checks whether Microsoft Lync is running and opens the app if not. If I execute the shell script directly from a bash terminal, it opens Lync and, once Remember Username/Password is clicked, it signs in fine.
I put the same script in my crontab, to be run every minute, so Lync is started if it isn't already running. But for some reason, when Lync is opened by the crontab execution, it does not auto sign-in and asks me for the Lync password.
Why is this behavior different?
crontab entry -
* 9-17 * * 1-5 $HOME/lync.sh
#!/bin/bash
LYNC_PID=$(launchctl list | grep "Lync" | awk '{print $1}')
if [ "$LYNC_PID" = "" ]
then
echo "Lync not running"
ERROR_REPORTER_PID=$(ps -ef | grep -i "[m]icrosoft error reporting" | awk '{print $2}')
if [ "$ERROR_REPORTER_PID" != "" ]
then
echo "Killing Microsoft Error Reporter"
kill -9 $ERROR_REPORTER_PID
fi
echo "Starting Lync"
open /Applications/Microsoft\ Lync.app
fi
The Dock and other interactive commands running in a session have access to the session's information, including your keychain and the screen. Cron has none of this. Attempting to run interactive programs from crontab is doomed to fail, in pesky corner cases if not outright in the main usage scenario.
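On macOS, the session-friendly alternative to cron is a per-user LaunchAgent, which runs inside the GUI session where the keychain is available. A sketch; the com.example.lync-watch label and the script path are placeholders:

```shell
# install a per-user LaunchAgent in place of the crontab entry
# (label and script path below are placeholders)
plist="$HOME/Library/LaunchAgents/com.example.lync-watch.plist"
mkdir -p "$(dirname "$plist")"
cat > "$plist" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0"><dict>
  <key>Label</key><string>com.example.lync-watch</string>
  <key>ProgramArguments</key>
  <array><string>/bin/bash</string><string>/Users/me/lync.sh</string></array>
  <key>StartInterval</key><integer>60</integer>
</dict></plist>
EOF
# then: launchctl load "$plist"
```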

bash overriding a single line in a text file with another while using variables

Overview: I am trying to make a script that will take a list of machines and manually update their /etc/shadow files with a new root password. I know this isn't the best method, but my boss wants this process automated. We are using an application called Puppet for 90% of the update, but some machines failed the update or can't have Puppet installed, hence this dodgy fix.
(Sorry for any stupid errors, it's only my 3rd week using any Unix product; I have been a Windows admin my whole life.)
Issue:
I need to SSH into the PCs and update the /etc/shadow file, but only change the root user (not all systems have the same users, and I don't want to remove any of those users in the process). I have gotten as far as extracting line 1 of /etc/shadow over SSH and checking whether that user is indeed root, but I am stuck on updating the /etc/shadow file on the remote machine, as my boss has asked that the following standards be met:
I can't have any real user interaction in the script, so no manually typing the new password.
I am not allowed to have the new password displayed anywhere in clear text (inside the script or in another file).
OK, hopefully that's enough info; on to the code.
user=root
unknown='unknown.txt'
filelines=`cat $unknown`
prod='new-shadow'
ohf='option-one-holding-file'
pel=prod-errorlog
for line in $filelines ; do
  echo "Attempting to fix $line please wait"
  ssh -oBatchMode=yes -l $user $line "awk '{if (NR==1) print \$0}' /etc/shadow" >> $ohf
  if grep -q "root:" $ohf ; then
    echo "root user located, updating to production password"
    # ** This is the line that doesn't work **
    ssh -oBatchMode=yes -l $user $line "sed -i '1s/.*/$prod/' /etc/shadow"
  else
    echo "unable to find root user; this will require a manual fix and this server will be listed in the prod-errorlog file"
    echo "$line" >> $pel
  fi
done
The line in bold (the sed line) doesn't work. I know why it doesn't work, but I have no idea how to fix it. Thank you to anyone who takes the time to look at this; I know the code's a bit of a mess, please forgive me.
To replace only the first line:
"echo '$prod' > /etc/shadow.new; tail -n +2 /etc/shadow >> /etc/shadow.new; mv -f /etc/shadow.new /etc/shadow"
Sorry, my previous argument was wrong: the '$prod' part in your script is correct and is expanded OK. However, $prod contains many characters that are reserved in regular expressions, which is what breaks the sed version. This new version simply creates a new file (replacing the first line) and then moves/overwrites it onto the target.
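An alternative worth mentioning avoids rewriting /etc/shadow by hand at all, assuming the targets have chpasswd (most Linux distributions do): feed it a pre-computed hash with -e, so the clear-text password never appears in the script. A sketch; the hash below is a fake placeholder, and a real one could be generated once locally with something like openssl passwd -6:

```shell
# set root's password from a pre-computed hash via chpasswd -e instead
# of editing /etc/shadow; the hash here is a fake placeholder
hash='$6$examplesalt$examplehashvalue'
entry="root:$hash"
printf '%s\n' "$entry"
# per host, something along the lines of:
#   printf '%s\n' "$entry" | ssh -oBatchMode=yes -l $user $line 'chpasswd -e'
```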
