How to stop/close/kill a TCP connection on macOS - macos

First of all, this is not a duplicate. My question is:
I have an application currently running on macOS and I want to cut (sever, close, or stop) its TCP connection from the terminal. The problem is that I don't want to kill the process, because that is the only solution I found in other answers. Also, I have sudo access and I know the PID.
Here is what I tried, and it doesn't work:
lsof -i TCP:X | awk '/LISTEN/ {print $2}' | xargs kill -9
I replaced X with the port number I got from this command:
sudo lsof -i -n -P | grep TCP
The second thing I tried is the approach described here: https://www.scm.keele.ac.uk/staff/stan/2016/05/16/closing-sockets-without-killing-processes/
But the lldb -p $PID step gives me an error like this:
error: attach failed: attach failed (Not allowed to attach to process. Look in the console messages (Console.app), near the debugserver entries when the attached failed. The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)
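For reference, the approach in that article amounts to attaching a debugger and calling close() on the socket's file descriptor (a sketch; the fd number 5 is hypothetical and would come from the FD column of lsof):
sudo lsof -nP -p $PID | grep TCP   # note the FD column, e.g. "5u"
sudo lldb -p $PID
(lldb) call (int)close(5)
(lldb) detach
(lldb) quit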
Maybe I missed something, or maybe I should look for a dedicated program for this purpose? Windows has one: https://www.nirsoft.net/utils/cports.html
I'm really curious about this, because all the answers I have found suggest killing the whole process, and I don't want that.

OK... just try to learn from the code here -> https://github.com/doug-leith/appFirewall
Thanks, dude, for the downvote. But my answer is the only one on the internet that leads to a working firewall!

Related

Close "<app> quit unexpectedly" window from terminal/bash

Is there a way to close/kill the "<app> quit unexpectedly" window from the terminal or a bash script? What's the process name?
(AppleScript automation solutions are not acceptable.)
You can:
killall UserNotificationCenter
It will kill UserNotificationCenter (and ALL its open windows too), so the message disappears. (Don't worry, the next error message will start it again automatically.)
But (IMHO) it is better to use the osascript command in the form:
osascript -l JavaScript <<EOS
... apple-scripting using JavaScript ...
EOS
IMHO, JavaScript is much easier to understand (for a common programmer) than the "standard" AppleScript.
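For example, a minimal sketch that dismisses the crash dialog via UI scripting (assumptions: the button is labeled "OK" and the caller has Accessibility permission):
osascript -l JavaScript <<EOS
// Click the OK button of the crash dialog via System Events.
// "OK" is an assumed label; Accessibility permission is required.
var se = Application('System Events');
var unc = se.processes.byName('UserNotificationCenter');
unc.windows[0].buttons.byName('OK').click();
EOS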
You can disable it appearing in the first place by:
defaults write com.apple.CrashReporter DialogType none
Other possible values are developer (show stack traces for all processes) and crashreport (the default).
This also means that no entries will be written to Console.app, though. The dialog itself is shown by UserNotificationCenter and can be disabled (along with many other notifications) by:
sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.UserNotificationCenter.plist
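If you later want the default behavior back, the inverse of the two commands above should restore it (a usage note; crashreport is the default value listed earlier):
defaults write com.apple.CrashReporter DialogType crashreport
sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.UserNotificationCenter.plist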
Some context:
Mach has a concept of exception ports. Each thread has thread, task, and host exception ports, which are checked when an exception occurs. The CrashReporter daemon registers the host exception port and gets activated when no other signal handler has run. It then creates a stack trace and memory map of the process and instructs UserNotificationCenter to show it. By default it only does so for GUI applications.
On High Sierra, I had to use defaults write com.apple.CrashReporter DialogType -string "developer"
I am not sure if Apple has the same core utilities, but I come from the Unix world too.
For example, the solution would be to find the process ID via its name. On my Linux system I can use the following to find a process ID:
ps -aux
Another variation would be top. Both give a ton of information, so I have to filter the output with grep.
After that I would extract the PID with cut or sed.
Last but not least, the kill command.
The script would look something like this:
#!/bin/sh
# Match the process name (e.g. UserNotificationCenter), not the window title.
PNAME="UserNotificationCenter"
# kill does not read stdin, so hand it the PIDs as arguments via xargs;
# grep -v grep keeps the grep process itself out of the match.
ps aux | grep "$PNAME" | grep -v grep | awk '{print $2}' | xargs kill
But be warned: this script can do huge damage if you don't know how to use it.
To be honest, I would never use something like this; I would execute kill manually instead.

Bash writing output to file when using timeout results in error

I'm using this script to monitor iBeacon Bluetooth devices, and it works as expected.
sudo beacon scan -c
However, I recently changed it to run for just a few seconds and write the result to a file, like so:
sudo timeout 5 beacon scan -c > result.txt
The problem is that this writes nothing to the file, so there is probably an error in the command. Redirecting the error stream to the file as well shows me an error.
sudo timeout 5 beacon scan -c &> result.txt
Contents of result.txt:
Set scan parameters failed: Input/output error
It feels like bash is trying to apply &> result.txt as parameters to the beacon scan command. I'm not very good at bash, so there is probably a simple solution to this problem, but I haven't found one!
Some programs designed to be interrupted with Ctrl-C don't behave the same when terminated with SIGTERM, which is what timeout sends by default. Try the -s INT option to have timeout send SIGINT instead.
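Applied to the command above, that would be:
sudo timeout -s INT 5 beacon scan -c &> result.txt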

TMOUT not working if user opens files

I'm managing a small HPC Linux cluster.
To close SSH sessions that are inactive for a while, I configured TMOUT in /etc/profile.
It works fine when users are sitting at the terminal without any file handling.
But it does not work when users are in the middle of file handling, such as editing a file with the vi editor or printing results to the terminal with tail -f.
Those sessions stay open long past the TMOUT value.
Please let me know how to close this kind of SSH session.
Thanks in advance.
The answer is simple: it's not supposed to work in those cases. tail -f keeps printing to STDOUT, and the same is true of files opened with vi.
You would not want TMOUT to close your connection while you are monitoring a log file or editing a file.
If you intend to kill such sessions anyway, then:
1) Get the idle time from w.
2) Filter out all the TTYs whose idle time exceeds your limit.
3) lsof /dev/TTY_value | egrep '^bash or your default shell' | awk '{print $2}'
4) kill the PIDs you get in step 3.
I hope this will help your cause, but it's risky; a rough sketch of the steps is below.
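Putting those four steps together might look something like this (a sketch only: the one-hour idle pattern and the shell names are assumptions, and w's column layout varies between systems, so test carefully):
#!/bin/bash
# Steps 1+2: print TTYs whose IDLE column looks like "H:MMm" (idle an hour or more).
# Field 5 is IDLE here; adjust if your w output has different columns.
w -h | awk '$5 ~ /^[0-9]+:[0-9]+m$/ {print $2}' | while read -r tty; do
    # Step 3: find the shell attached to that TTY (bash/zsh/sh assumed).
    pid=$(lsof "/dev/$tty" 2>/dev/null | egrep '^(bash|zsh|sh)' | awk '{print $2}' | sort -u | head -n 1)
    # Step 4: kill it.
    [ -n "$pid" ] && kill "$pid"
done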

Spawn subshell for SSH and continue with program flow

I'm trying to write a shell script that automates certain startup tasks based on my location (home/campusA/campusB). I go to university and take classes on two different campuses (hence campusA/campusB). My location is determined by which wireless network I'm connected to. For the purposes of this script, we can assume that I will be connected to one of these networks when the script is called, and my script knows which one I'm connected to based on a call to iwconfig.
This is what I want it to do:
cat file1 > file2 # always do this, regardless of where I am
if I'm at home:
start tweetdeck, thunderbird, skype
else if I'm at campusA:
activate the login script # I need to log in on a webform before I get internet access.
# I have written a script to automate this.
# Wait for this script to finish before doing anything else.
myProg2 & # I want myProg2 running in the background until I shut down my computer.
else if I'm at campusB:
ssh username@domain # this is the problematic line
myProg2 & # I want myProg2 running in the background until I shut down my computer.
start tweetdeck, thunderbird
close the terminal with the "exit" command
The problem is that campusB's wireless network is behind a firewall, which grants me internet access ONLY after I successfully ssh to username@domain. After a successful ssh, I need to keep the terminal window active in order to keep the internet access. If I close the terminal window, I lose internet access (this is bad).
When I try doing just ssh username@domain, the script stops because I don't exit the ssh command. I can't ^C out of it, which means that the rest of the script is never executed. I also have the same problem if I just close the terminal window in an attempt to kill the ssh session.
Some googling brought me to subshells, which I'm either using wrong or can't use to solve my problem. So how should I go about solving this problem? I'd appreciate any help - I've been at this for a while now and am unable to find anything helpful. If it makes a difference, I'd rather not store my ssh password in the script.
Further, backgrounding the ssh call with an ampersand (ssh username@domain &) doesn't seem to do any good (can anyone explain why?).
Thank you in advance
EDIT
I must clarify that the ssh connection has to stay active in order for me to have internet access. Thus, when I close the terminal window, I need the ssh connection to still be active.
I had a script that looped over 6 servers, calling each via ssh in the background. In one part of the script there was a misbehaving vendor application; the application didn't 'let go' of the connection properly. (Other parts of the script using ssh in the background worked fine.)
I found that using ssh -t -t cured the problem. Maybe this can help you too.
(A teammate found this on the web, and we had spent so much time on the problem that I never went back to read the article that suggested it. The man page on our system gave no hint that such a thing was possible.)
Hope this helps.
You may want to try double-backgrounding myProg2 to detach it from the tty:
# cf. "Wizard Boot Camp, Part Six: Daemons & Subshells",
# http://www.linux-mag.com/id/5981
(myProg2 &) &
Another option may be to use the daemon tool from the libslack package:
http://ingvar.blog.linpro.no/2009/05/18/todays-sysadmin-tip-using-libslack-daemon-to-daemonize-a-script/
Having an ssh with a pseudo-tty in a background shell
In addition to @shellter's answer, I would like to add some precision:
Where @shellter said:
The man page on our system gave no hint that such a thing was possible
On my system (Debian 7 GNU/Linux), if I run:
man -Pcol\ -b ssh | grep -A3 '^ *-t '
I can read:
-t Force pseudo-tty allocation. This can be used to execute arbi‐
trary screen-based programs on a remote machine, which can be
very useful, e.g. when implementing menu services. Multiple -t
options force tty allocation, even if ssh has no local tty.
Yes: Multiple -t options force tty allocation, even if ssh has no local tty.
This means: if you run a tool remotely that requires access to a pseudo-terminal (a pty, like /dev/pts/0), you can run it by using the -t switch.
But this works only if ssh is run from a shell console (i.e., one that has its own pty). If you plan to run it in a shell session without a console, like a background script, you can use multiple -t options to force pseudo-tty allocation from ssh.
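For example (a sketch: somewhere is a placeholder hostname, and top stands in for any screen-based program):
# run from a background script with no local tty; the doubled -t forces
# ssh to allocate a remote pseudo-tty anyway
ssh -t -t somewhere top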
Multiple ssh shells on one ssh connection
In addition to the answers from @tommy and @geekosaur, I would like to add some precision:
@tommy points to a very interesting feature of ssh. I'm not sure this has a lot to do with the answer, but when talking about long-lived connections, this feature has to be clearly understood.
Once a connection is established, ssh can (and knows how to) use it to drive a lot of things over this single connection:
-L lets you forward TCP connections from your local machine/network to the distant network. (The full syntax is -L localip:localport:distip:distport, where localip can be specified to let other hosts on the same local domain access the same TCP bind, and distip can be any host on the distant network, not only localhost.) Sample: -L 192.168.1.31:8443:google.com:443 lets any host on the local domain reach google through your host: https://192.168.1.31:8443
-R The same remarks, in the reverse direction!
-M Tells ssh to open a local unix socket for binding subsequent ssh sessions. Simply open two terminal windows. First, in both windows, run ssh somewhere, then run netstat -tan | grep :22 or netstat -tan | grep 192.168.1.31:22 (assuming 192.168.1.31 is your own host's IP).
Then close all your ssh sessions and, in the first terminal, run ssh -M somewhere; in the second, simply run ssh somewhere. You may see in the second terminal:
$ ssh somewhere
+ ssh somewhere
Last login: Mon Feb 3 08:58:01 2014 from elsewhere
If you now run netstat -tan | grep 192.168.1.31:22 (in either of the two open ssh sessions), you should see that there is only one TCP connection.
This kind of feature can be used in combination with -L and maybe some sleep 86399...
To work around a TCP-killer router that closes every TCP connection inactive for more than 120 seconds, I run:
ssh -M somewhere 'while :;do uptime;sleep 60;done'
This ensures the connection stays up even if I don't hit a key for more than two minutes.
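For reference, this multiplexing is more often configured in ~/.ssh/config with the ControlMaster family of options (a sketch; the Host alias somewhere is a placeholder):
# ~/.ssh/config
Host somewhere
    ControlMaster auto               # first connection becomes the master
    ControlPath ~/.ssh/cm-%r@%h:%p   # unix socket shared by later sessions
    ControlPersist 600               # keep the master alive 10 minutes after last use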
Here are a few thoughts that might help.
Sub-shells
Sub-shells fork new processes, but don't return control to the calling shell until they finish. If you want a sub-shell to do the work for you in the background, then you'll need to append a & to the line.
(ssh username@domain) &
But this doesn't look like a compelling reason to use a sub-shell. If you had a number of commands you wanted to execute in order relative to each other, yet in parallel with the calling shell, then maybe it would be worth it. For example...
(dothis.sh; thenthis.sh; andthislastthingtoo.sh) &
Forking
I'm not sure why & isn't working for you, but it may be worth looking into nohup as well. This makes the command "immune" to hang-up signals.
nohup ssh username@domain (try with and without the & at the end)
Passwords
Not storing passwords in the script is essential for any ssh automation. You can accomplish that using public-key authentication, which is an inherent feature of ssh. I won't go into the details here because there are a number of great resources all across the interwebs on setting this up. I strongly suggest investigating this further.
HOWTO: set up ssh keys - Paul Keck, 2001
SSH Keys - archlinux.org
SSH with authentication key instead of password - Debian Administration
Secure Shell - Wikipedia, the free encyclopedia
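In practice, the setup boils down to two commands (a minimal sketch; the ed25519 key type and username@domain are examples):
# generate a key pair (accept the default path; a passphrase is optional)
ssh-keygen -t ed25519
# append the public key to ~/.ssh/authorized_keys on the remote host
ssh-copy-id username@domain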
If you do go this route, I also suggest running ssh in "batch mode", which disables password querying and automatically disconnects from the server if it becomes unresponsive after 5 minutes.
ssh -o 'BatchMode=yes' username@domain
Persistence
Then, if you want to persist the connection, run some silly loop in bash! :)
ssh -o 'BatchMode=yes' username@domain "while (( 1 == 1 )); do sleep 60; done"
The problem with & is that ssh loses access to its standard input (the terminal), so when it goes to read something to send to the other side it either gets an error and exits, or is killed by the system with SIGTTIN which will implicitly suspend it. The -n and -f options are used to deal with this: -n tells it not to use standard input, -f tells it to set up any necessary tunnels etc., then close the terminal stream.
So the best way to do this is probably to do
ssh -L 9999:localhost:9999 -f host & # for some random unused port
and then manually kill the ssh before logout. Alternately,
ssh -L 9999:localhost:9999 -n host 'while :; do sleep 86400; done' </dev/null &
(The redirection is to make sure the SIGTTIN doesn't happen anyway.)
While you're at it, you may want to save the process ID and shut it down from your .logout/.bash_logout:
ssh -L 9999:localhost:9999 -n host 'while :; do sleep 86400; done' < /dev/null & echo $! > ~/.ssh_pid; chmod 0600 ~/.ssh_pid
and in .bash_logout:
if test -f ~/.ssh_pid; then
    set -- $(sed -n 's/^\([0-9][0-9]*\)$/\1/p' ~/.ssh_pid)
    if [ $# = 1 ]; then
        kill $1 >/dev/null 2>&1
    fi
    rm ~/.ssh_pid
fi
The extra code there attempts to avoid someone sabotaging your ~/.ssh_pid, because I'm a professional paranoid.
(Code untested and may have typos.)
It's been a while since I've used ssh, and I can't test it right now, but have you tried the -f switch?
ssh -f username@domain
The man page says it backgrounds ssh. I'm not sure why & wouldn't work, but I guess ssh is interpreting it as a command to be run on the remote machine.
Maybe screen + ssh would fit the bill as well?
Something like:
screen -d -m -S sessionName cmd
screen -d -m -S sessionName cmd &
# reconnect with
screen -r sessionName

How can I find out what a command is executing in Terminal on macOS

After I run a shell script (which calls a bunch of other scripts depending on conditions, and which is too complicated to understand), I can execute a command 'gdbclient' in my macOS terminal.
But when I do 'which gdbclient' and 'alias gdbclient', both show nothing.
Is there any way for me to find out what 'gdbclient' is actually doing?
You can open up another terminal window and type: ps
That will list all the running processes.
If your script is running as a different user than the current one, you can use ps -ef to list all running processes.
If you know the PID of the process that ran your script, you can find all child processes via parent PID using ps -f | grep [pid]
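Alternatively, you can match the parent-PID column explicitly instead of grepping (a sketch; $PID is the parent PID you already know):
# column 3 of ps -ef output is PPID; print only direct children of $PID
ps -ef | awk -v ppid="$PID" '$3 == ppid'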
You can use the Activity Monitor to check things out pretty thoroughly. To get the right privileges to see everything going on you can do:
sudo open /Applications/Utilities/Activity\ Monitor.app/
DTrace can give you some helpful information; see dtrace(1).
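For example, dtruss, a DTrace-based script that ships with macOS, can show the system calls a process makes (a sketch; note that SIP may block tracing of protected binaries):
# trace system calls of any running process named gdbclient
sudo dtruss -n gdbclient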
To find the process 'gdbclient':
ps aux | grep gdbclient
That won't tell you what it's doing, but it will confirm that it's running.
