Trying to kill process on port 8000 confusion - ruby

I am writing a Ruby script that deploys a server on port 8000 in the background, and then in the foreground I issue queries to the server. After I've issued my queries I kill the server; however, when I kill it, it seems to be switching ports.
I am doing it the following way in the Ruby script:
To see the PID that is running on port 8000:
lsof -i:8000 -t
Result:
RUNNING ON PORT 8000: COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 26364 user1 84u IPv6 199069 0t0 TCP *:8000 (LISTEN)
To kill the server I issue the command:
kill 26364
I then see if anything is running on port 8000:
# check if killed
lsof -i:8000 -t
Result:
RUNNING ON PORT 8000: COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ruby 25560 user1 58u IPv4 199123 0t0 TCP localhost:45789->localhost:8000 (ESTABLISHED)
java 26364 user1 84u IPv6 199069 0t0 TCP *:8000 (LISTEN)
java 26364 user1 85u IPv6 199124 0t0 TCP localhost:8000->localhost:45789 (ESTABLISHED)
I only want to kill the process that is listening on port 8000 and keep my Ruby script running.
Can someone please tell me what is going on? Why is it switching ports? How can I kill only the server on that port?

It doesn't look to me like it's switching ports; it's still listening on port 8000. It looks to me like two things are happening:
The java process (PID 26364) is catching or ignoring the kill signal (SIGTERM), and continuing to listen on port 8000.
A ruby process (PID 25560) is making a connection to localhost:8000 (from port 45789, which was probably dynamically allocated). That is, ruby is making a normal connection to the server on port 8000.
Note that the java process owns the port 8000 end of the localhost:8000<->localhost:45789 TCP session, and the ruby process owns the port 45789 end.
Whether the ruby process's connection is somehow a result of the kill signal, or just something it happened to do at about the same time, I couldn't tell you.
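If the goal is to kill only the process that owns the LISTEN socket on port 8000 (and leave the Ruby client's ESTABLISHED connection alone), one option is to restrict lsof to listening sockets and escalate to SIGKILL if the server traps SIGTERM. A minimal Ruby sketch, assuming port 8000 and an arbitrary 2-second grace period:
# find only the PID with a LISTEN socket on 8000; ignores ESTABLISHED client sockets
pid = `lsof -iTCP:8000 -sTCP:LISTEN -t`.strip
unless pid.empty?
  Process.kill('TERM', pid.to_i)      # ask the server to shut down
  sleep 2                             # arbitrary grace period
  begin
    Process.kill(0, pid.to_i)         # signal 0 only checks whether the process is still alive
    Process.kill('KILL', pid.to_i)    # escalate if SIGTERM was caught or ignored
  rescue Errno::ESRCH
    # already exited after SIGTERM
  end
end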


Accessing tcp port 8080 externally on macos mojave

I am trying to access a listening TCP socket on my MacBook from an external client on the same Wi-Fi LAN.
This works for some ports, e.g. 8000, but not for others, e.g. 8080, 8081, 8082.
How can I open up or access TCP port 8080 externally?
Working steps on port 8000
Server
$ nc -lv 8000
Client
$ nc -z 192.168.101.98 8000
Connection to 192.168.101.98 port 8000 [tcp/irdmi] succeeded!
Non-working steps on port 8080
Server
$ nc -lv 8080
Client
$ nc -z 192.168.101.98 8080
(The command just hangs)
Diagnostics
$ lsof -P -i TCP:8000
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nc 75782 ... 3u IPv4 0x5be3e11e5a732339 0t0 TCP *:8000 (LISTEN)
$ lsof -P -i TCP:8080
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nc 75952 ... 3u IPv4 0x5be3e11e581e2fb9 0t0 TCP *:8080 (LISTEN)
$ sudo pfctl -s all | grep Status
No ALTQ support in kernel
ALTQ related functions disabled
Status: Disabled Debug: Urgent
I am running macOS Mojave 10.14.1 (same behavior on 10.14 as well).
Update
I changed nothing, and everything suddenly works. I am very curious what made the difference. I will close the question if everything keeps working.
Solution
I had the Endpoint Security VPN client installed. It keeps a firewall active at all times, blocking some ports, even when not connected to a VPN server.
By shutting down the client daemon, I can access all ports again.
Steps to shut down the daemon
From: https://gist.github.com/phoob/671e65332c86682d5674
Kill the client and run these commands to stop the daemon:
sudo launchctl unload /Library/LaunchDaemons/com.checkpoint.epc.service.plist
sudo kextunload /Library/Extensions/cpfw.kext
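After unloading the daemon, one quick way to confirm from the client machine that the ports are reachable again is a plain TCP connect test instead of nc. A rough Ruby sketch, using the IP and ports from this question and an arbitrary 2-second timeout:
require 'socket'

[8000, 8080, 8081, 8082].each do |port|
  begin
    Socket.tcp('192.168.101.98', port, connect_timeout: 2) { }  # connect, then close immediately
    puts "port #{port} reachable"
  rescue SystemCallError => e
    puts "port #{port} blocked or closed (#{e.class})"
  end
end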

Able to open TCP port but not listening

Using "Add rule" in Windows Firewall, I was able to open TCP port 15537. When I run netstat -ano in a terminal window, this port is not listed. I tried running telnet in a terminal window (e.g. telnet IP port) but get
Connecting To localhost...Could not open connection to the host, on port 15537: Connect failed
Then I downloaded the PortQry application and ran it from a different machine on the same network; the result I received was
"Not Listening".
I have already spent more than two days on this and asked an internal group, but could not find a solution.
Note: both machines are running Windows 10.
No solution is needed as no problem is indicated in the question. You have opened a TCP port successfully. You have not made any attempt to cause anything to listen to that TCP port.
It's not clear what results you expected, but you got the results that you should have expected. Nothing is wrong. The port is open because you opened it. Nothing is listening on that port because you didn't set anything to listen on that port.
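For example, the quickest way to see the expected behavior is to start a throwaway listener on the port before testing with netstat or telnet. A minimal Ruby sketch (any quick listener will do), using port 15537 from the question:
require 'socket'

server = TCPServer.new('0.0.0.0', 15537)  # bind and listen on all interfaces
puts 'listening on 15537, press Ctrl+C to stop'
loop do
  client = server.accept                  # telnet / PortQry connections land here
  client.puts 'hello'
  client.close
end
With this running, netstat -ano should list the port and PortQry should report LISTENING, assuming the firewall rule is in place.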
There may be a forwarding rule involved. If traffic to the port is not destined for the local machine, netstat will not show the port as listening, but the firewall can still act on it, usually by forwarding it onward.
I am not very familiar with Windows firewall configuration, but I know that if there is a forwarding rule in Linux, like
-p tcp -m tcp --dport 8080 -j {other forwarding chain}
we cannot see 8080 listening on this host (netstat -tunpl), but telnet host 8080 may still connect.
Use nmap instead of netstat to detect whether the port is open:
nmap -p your_port_number your_local_ip
Run a service on that port.
For example, in my case, in order to open a port I run "service ssh start" or "service apache2 start", which open ports 22 and 80 respectively on my Linux machine.
When I then use nmap on my LAN, both ports show as open.
Hope it helps.

What does it mean that my resource manager does not have an open port 8032?

I have my YARN resource manager on a different node than my namenode, and I can see that something is running, which I take to be the resource manager. Ports 8031 and 8030 are bound, but not port 8032, to which my client tries to connect.
I am on CDH 5.3.1, and the following is part of the output of lsof -i
java 12478 yarn 230u IPv4 61325 0t0 TCP hadoop2.adastragrp.com:48797->hadoop2.adastragrp.com:8031 (ESTABLISHED)
java 13753 yarn 159u IPv4 61302 0t0 TCP hadoop2.adastragrp.com:8031 (LISTEN)
java 13753 yarn 170u IPv4 61308 0t0 TCP hadoop2.adastragrp.com:8030 (LISTEN)
java 13753 yarn 191u IPv4 61326 0t0 TCP hadoop2.adastragrp.com:8031->hadoop2.adastragrp.com:48797 (ESTABLISHED)
How do I diagnose what's wrong here? I suspect that the resource manager is running, but can't bind to port 8032, but I have no idea why that could be.
In Cloudera Manager, the ResourceManager is shown as having good health, but at the same time I get this report:
ResourceManager summary: hadoop2.adastragrp.com (Availability:
Unknown, Health: Good). This health test is bad because the Service
Monitor did not find an active ResourceManager.
[Edit]
I can execute yarn application -list locally on the resource manager node, but when I do the same on a different node, it tries to connect to the resource manager correctly, but fails to do so. Both nodes are connected, can ping each other, and so on. I disabled the iptables service on the VM.
nmap output:
PORT STATE SERVICE REASON
8032/tcp filtered unknown host-prohibited
Is the port occupied by another process? For example, if you stopped your Hadoop cluster abnormally, some processes may still be running. If so, try ps -e | grep java and kill them.
Gotcha: on CentOS 6, stopping the iptables service didn't really disable the firewall. I had to disable it with system-config-firewall.
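To verify from the client node that 8032 is no longer filtered once the firewall is actually disabled, a small connectivity check is enough. A sketch, using the hostname from the lsof output above and an arbitrary 3-second timeout:
require 'socket'

begin
  Socket.tcp('hadoop2.adastragrp.com', 8032, connect_timeout: 3) { }
  puts 'port 8032 is reachable; the ResourceManager is listening'
rescue Errno::ECONNREFUSED
  puts 'port 8032 is reachable but nothing is listening on it'
rescue SystemCallError
  puts 'port 8032 still looks filtered by a firewall'
end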

Windows keeps a listening socket for a non-existent process indefinitely

On Windows, after process 628 (my app) has exited, tcpview shows:
Process PID Pro Local Address Local Port Remote Address Rem Port State
-------------- --- --- ------------- ---------- --------------- -------- -----------
<non-existent> 628 TCP 0.0.0.0 http 0.0.0.0 0 LISTENING
<non-existent> 628 TCP 0.0.0.0 https 0.0.0.0 0 LISTENING
<non-existent> 628 TCP 0.0.0.0 http x.x.x.x xxxxx ESTABLISHED
I was able to kill the ESTABLISHED connection with TCPView, but couldn't kill the LISTENING ones (as admin) with TCPView or CurrPorts. The LISTENING entries remained indefinitely (>24 hours), preventing the app from binding to ports 80 and 443 when restarted ("[10048] Only one usage of each socket address (protocol/network address/port) is normally permitted").
When I added the SO_REUSEADDR option before binding the listening socket, the app still couldn't bind the ports, this time with "[10013] An attempt was made to access a socket in a way forbidden by its access permissions".
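For reference, the SO_REUSEADDR-before-bind pattern described above looks roughly like this in Ruby (port 80 taken from the question); as noted, it did not help here because the stale socket is stuck in LISTENING rather than TIME_WAIT:
require 'socket'

sock = Socket.new(Socket::AF_INET, Socket::SOCK_STREAM)
sock.setsockopt(Socket::SOL_SOCKET, Socket::SO_REUSEADDR, true)  # must be set before bind
sock.bind(Addrinfo.tcp('0.0.0.0', 80))
sock.listen(128)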
My Questions:
Does it make any sense for the listening socket to be kept after the owning process is gone? It's not in half-closed state, since no connection has been established.
Is it expected that a socket of a non-existent process will linger indefinitely?
Are these known bugs in Windows/Winsock?
Thanks!

Phusion passenger can't free port

OK, I'm running Passenger Standalone and made a dumb mistake; now I need help fixing it.
A test app directory was running standalone Passenger on a specific port, and I deleted the directory so that I could pull a new app in its place (and use the same port). Not thinking about Passenger at all, I should have stopped the daemon first. Now the port is tied up somewhere, and I cannot for the life of me figure out how to free it. I found the process the port was started by and killed it, but to no avail: the address is still bound and unusable.
Short of restarting the server (not really a viable option for me), how can I kill that nginx / Passenger process altogether so that I can start a new instance of Passenger on that port?
Run lsof -i :portnumber, e.g. lsof -i :3000
You will get something like this ...
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
Passenger 10514 USERNAME 5u IPv4 0xea95336b89bfa931 0t0 TCP *:hbci (LISTEN)
Passenger 10515 USERNAME 5u IPv4 0xea95336b89bfa931 0t0 TCP *:hbci (LISTEN)
Stop the processes using kill PID ... something like kill 10514
Passenger Standalone starts Nginx for you, and that is what is actually bound to the port. Because you deleted the directory, Passenger Standalone cannot access Nginx's lock file or PID file, which is why killing Passenger Standalone did not kill Nginx for you. You should kill Nginx manually.
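A rough way to clean up everything still bound to the port in one pass (Passenger and the nginx it spawned); a Ruby sketch assuming port 3000 as in the lsof example above:
# list every PID that still holds port 3000 and terminate it
pids = `lsof -ti:3000`.split.map(&:to_i)
pids.each do |pid|
  begin
    Process.kill('TERM', pid)
  rescue Errno::ESRCH
    # already gone
  end
end
If nginx still holds the port afterwards, repeating the loop with 'KILL' instead of 'TERM' should free it.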
