I need to kill the process behind an established connection to an nginx worker process.
Is there a way to get the PID for each of nginx's established connections?
If I run netstat against the nginx worker processes, I get the PIDs of the worker processes themselves, which need to stay alive after I kill the process that is connected to them.
I've tried with netstat -anp | grep "client_ip_address" | grep ESTABLISHED
and I am getting this:
tcp 0 0 client_ip:dest_port client_ip:source_port ESTABLISHED 15925/nginx: worker
so 15925 is the process ID that needs to stay alive when I kill the connection to it.
Is there a way to do it?
I think maybe you're confusing process IDs and connections. Nginx starts a master process, which then spawns off a handful of worker processes. You might only have (say) 5 workers on a fairly busy system.
As connections come in, nginx wakes up and one of the workers is assigned to that connection. From then on, any TCP traffic flows from the remote client to that worker process. Workers can each handle a large number of connections. Most HTTP connections only last for a few seconds, so as they close, they make space for the worker to take on more new connections.
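You can see that layout for yourself with something like the following (this assumes the processes are simply named nginx, which is the default):
# Show the nginx master and its workers; PPID makes the hierarchy obvious.
ps -o pid,ppid,user,cmd -C nginx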
So... if you're trying to use the shell command 'kill', the best you could ever do would be to terminate one of the worker processes, which would close (potentially) a large number of connections.
If your aim is to disconnect one client whilst leaving all the others connected, you're out of luck. There isn't a way to do this with shell commands. If your HTTP connections are hanging around for a long time (as WebSockets do, for example), then it's possible you could write something on the application side which allows you to close connections that you don't like.
One more thing you may be thinking of is to close connections from places you don't like (a sort of 'spam' blocker). The more usual way to do this is just to reject the connection outright so it uses as few of your resources as possible. Again, this is something you can do dynamically on the application side, or else you could put something like Naxsi (https://github.com/nbs-system/naxsi/wiki) and fail2ban together (https://github.com/nbs-system/naxsi/wiki/A-fail2ban-profile-for-Naxsi).
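For a quick, manual version of that idea (this is just my illustration, not anything Naxsi- or fail2ban-specific; the address and port are made up), a firewall rule that rejects a source outright before it ever reaches nginx looks like:
# Reject new TCP connections to port 80 from one offending (example) address.
# fail2ban essentially automates inserting and removing rules like this one.
iptables -I INPUT -s 203.0.113.7 -p tcp --dport 80 -j REJECT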
I use grep's Perl-compatible regex (-P) to pull the PID out of the netstat output:
$ PID_TO_KILL=`netstat -anp | grep "client_ip_address" | grep ESTABLISHED | grep -Po "(?<=ESTABLISHED).*(?=\/nginx)"`
$ kill -9 $PID_TO_KILL
I'm using Ubuntu 18.04 LTS with bash 4.4.20.
I'm trying to create a little daemon to schedule data transmission between threads.
On the server I am doing this:
ncat -l 2001 -k -c 'xargs -n1 ./atc-worker.sh'
On the client I am doing this:
echo "totally-legit-login-token" | nc 127.0.0.1 2001 -w 1
And it works well!
Here is the response:
LaunchCode=1589323120.957093305 = Now=1589323120.957093305 = URL=https://totally-legit-url.com/ = AuthToken=totally-legit-auth-token = LastID=167
When the server receives a request from a client, it calls my little atc-worker.sh script. The server spits out a single line of text and it is back to business, serving other clients.
Thanks to the -k option, the server listens continuously, and multiple clients can connect at the same time. The only problem is that I cannot end the connection programmatically. I need the -k daemon functionality on the server to keep answering requests from clients, but I need the clients to quit listening after receiving a response and get on to their other work.
Is there an EOF signal/character I can send from my atc-worker.sh script that would tell nc on the client side to disconnect?
On the client, I use the -w 1 option to tell the client to connect for no more than a second.
But this -w 1 option has some drawbacks.
Maybe a second is too long. The connection should only take ~150 milliseconds, and waiting out the rest of the second slows each client down even when it already has its answer. And, as I said before, the client has chores to do! The client shouldn't be wasting its time after it has its answer!
Bad actors. Rogue clients that have no intention of closing out in a timely manner could connect to the server, and I want the server to have better control so it can shut down bad actors.
Maybe a second is too short. atc-worker.sh has a mechanism to wait for a lock file to be removed if there is one. If that lock file is there for more than a second, the connection will close before the client can receive its response.
Possible solutions.
The atc-worker.sh script could send a magic character set to terminate the connection. Problem solved.
On the client side, maybe curl would be a suitable choice instead of nc? But that would not solve my concern about dealing with bad actors. Maybe these are two different problems: the client closing the connection immediately after an answer is received, and the server dealing with bad actors who will use whatever clients they choose.
Maybe use expect? I'm investigating that now.
Thanks in advance!
OK. After a lot of digging, I found someone else with a similar problem. Here is a link to his answer.
Original question: Client doesn't close connection to server after receiving all responses
Original Answer: https://stackoverflow.com/a/50528286/3055756
Thanks @Unyxos
I modified my daemon to send an "ENDRESPONSE" line when it is done, even though the server does not drop the connection itself. And I modified the client to look for that "ENDRESPONSE" line. When the client gets the line, it drops the connection using the logic that @Unyxos uses below in his answer.
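A minimal sketch of that server-side change (the real logic in my atc-worker.sh is omitted; the only part that matters here is the sentinel line at the end):
#!/bin/bash
# atc-worker.sh (sketch): xargs -n1 passes the client's login token as $1.
token="$1"
# Real work would happen here; for the sketch, just emit a status line in the
# same format as my example above.
echo "LaunchCode=$(date +%s.%N) = Now=$(date +%s.%N) = URL=https://totally-legit-url.com/ = AuthToken=totally-legit-auth-token = LastID=167"
# Sentinel the client watches for so it can drop the connection itself.
echo "ENDRESPONSE"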
Here is his simple and elegant answer:
Finally found a working way (maybe not the best, but at least it's doing exactly what I want :D).
After all functions I send an "ENDRESPONSE" message, and on my client I test whether I have received this message or not:
function sendMessage {
    # Read the server's reply line by line; stop at the ENDRESPONSE sentinel
    # instead of waiting for the server to close the connection.
    while read -r line; do
        if [[ $line == "ENDRESPONSE" ]]; then
            break
        else
            echo "$line"
        fi
    done < <(netcat "$ipAddress" "$port" <<< "$*")
}
Anyway, thanks for your help. I'll try to implement other solutions later!
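For completeness, on my client the whole thing then reduces to setting the two variables the function expects and calling it (values are from my example above):
ipAddress="127.0.0.1"   # where the ncat daemon from my question is listening
port=2001
sendMessage "totally-legit-login-token"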
I made a UDP server class, and my program creates a process that runs in the background. It is a command-line utility, so running 'udpserver.exe start' binds the socket and begins a blocking recvfrom() call inside a for(;;) loop.
What is the best way to safely and 'gracefully' stop the server?
I was thinking that 'udpserver.exe stop' would send a UDP message such as 'stop', and the running process from 'udpserver.exe start' would recognize this message, break from the loop, and clean up (closesocket/WSACleanup).
Also, is just killing the process not a good idea?
Why are you running the UDP server as an external process instead of as a worker thread inside of your main program? That would make it a lot easier to manage the server. Simply close the socket, which will abort a blocked recvfrom(), thus allowing the thread to terminate itself.
But if you must run the UDP server in an external process, personally I would just kill that process, quick and simple. But if you really want to be graceful about it, you could make the server program handle CTRL-BREAK via SetConsoleCtrlHandler() so it knows when it needs to close its socket, stopping recvfrom(). Then you can have the main program spawn the server program via CreateProcess() with the CREATE_NEW_PROCESS_GROUP flag to get a group ID that can then be used to send CTRL-BREAK to the server process via GenerateConsoleCtrlEvent() when needed.
I am facing an issue with a C++ service that uses port 30015. It runs fine, but sometimes it fails to start because port 30015 is occupied and bind fails with the error WSAEADDRINUSE.
I ran netstat to check the port status:
netstat -aon | findstr 30015
Output:
TCP 0.0.0.0:30015 0.0.0.0 LISTENING 6740
I checked PID 6740 in Task Manager; this PID is not taken by any process.
After searching the net, I used TCPView to see the status of the port. TCPView shows the port in LISTENING state, and the process name is shown as "non-existent".
The application basically compresses and decompresses files using 7za. It listens on port 30015 for requests, then creates a child process and passes it the command line to run 7za to compress or decompress the file.
The child process doesn't use the socket. The server runs on the main thread and listens on port 30015. The problem appears after a restart of the server.
Since the child process does not use the socket as such, do I need to set bInheritHandles = FALSE?
Are you sure? This all sounds very confused. It's not possible for netstat to show a socket in the LISTEN state but for there to be no process -- especially if it shows the pid! You're confused because the process simply exited by the time you looked in Task Manager. All TCP connections in netstat are associated with a running process (except for unusual cases like TIME-WAIT sockets). So, find out which process has the socket open.
Secondly, I think you're trying to say that using bInheritHandles=TRUE as an argument to CreateProcess can lead to handle leaks. Only you have your code -- why not just look at the handles in your child and see if you do have a leak? It is only possible to use bInheritHandles=TRUE with great discipline; in the hands of novice programmers it will only lead to bugs. Create a named pipe with a suitable security descriptor, pass the name on the command line to the child, and connect back, rather than using handle inheritance, which is much too coarse-grained.
Finally, just to make sure, you do know to bind listening sockets with SO_REUSEADDR to prevent conflicting with active sockets using the same port? (SO_REUSEADDR still won't let two passive sockets be created on the same address/port combination, although it is a bit broken on Windows.)
Yes, this can happen on Windows. If you've created a child process that inherits handles from the parent process, then that includes TCP server sockets in the LISTEN state, which will always be listed as owned by the parent PID even after that PID has died.
These sockets will disappear when all child processes that you spawned have exited, causing the reference count on their handles to reach zero.
From a security standpoint you should not use inter-process handle inheritance, particularly when launching a 3rd-party application, unless you have a good reason to need the feature.
I have processes that, after being started, bind to an address and port. These processes are run in screen using exec so that the screen closes when the child process closes.
When killing a process, I use kill -9 PID, but sometimes the screen ends and yet, when I restart the process, the old process is still using the port and I have to terminate it again.
I've also read that SIGKILL leaves sockets open, memory stale, and random resources in use, so I turned to plain kill PID, which sends SIGTERM.
Is a SIGTERM guaranteed to allow the process to unbind from the address and port, or is there a better alternative?
If you SIGKILL all the processes that keep a listening port open, it is guaranteed to close.
However, the port might not become free for a few minutes while connections on it sit in the TIME_WAIT state, as required by the TCP specification (so that stray packets from the old connections die off and the peer still learns the connection is closed even if it missed the original closing packet).
Well-behaved servers open the socket with the SO_REUSEADDR option, allowing them to reclaim the same port immediately on restart, but this is application specific. Without it, the port will appear to be in use for a few minutes.
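If you want to see what is actually holding the port after the kill, something like this will show it (8080 is just a stand-in for your port):
# Connections left in TIME_WAIT that involve the old port:
ss -tan state time-wait | grep ':8080'
# Any process that still has a listening socket on it (-p needs root to show all owners):
ss -tlnp | grep ':8080'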
I am trying to reverse engineer a third-party TCP client/server app on Windows XP SP3 for which I have no source available. My main line of attack is to use Wireshark to capture TCP traffic.
When I issue a certain GUI command on the client side, the client creates a TCP connection to the server, sends some data, and tears down the connection. The server port is 1234, and the client port is assigned by the OS and therefore varies.
Wireshark is showing that the message corresponding to the GUI command I issued gets sent twice. The two messages bear different source ports, but they have the same destination port (1234, as mentioned previously).
The client side actually consists of several processes, and I would like to determine which processes are sending these messages. These processes are long-lived, so their PIDs are stable and known. However, the TCP connections involved are transient, lasting only a few milliseconds or so. Though I've captured the client-side port numbers in Wireshark and I know all of the PIDs involved, the fact that the connections are transient makes it difficult to determine which PID opened each port. (If the connections were long-lived, I could use netstat to map port numbers to PIDs.) Does anybody have any suggestions on how I can determine which processes are creating these transient connections?
I can think of two things:
Try Sysinternals' TCPView program. It gives a detailed listing of all TCP connections opened by all the processes on the system. If a process creates connections, you will be able to see them flash (both connects and disconnects are flashed) in TCPView, and you will know which processes to start looking into.
Try running the binary under a debugger. WinDbg supports multi-process debugging (so does Visual Studio, I think). You may have only export symbols to work with, but that should still work for calls made to system DLLs. Try breaking on any suspected Windows APIs you know will be called by the process to create the connections. MSDN should have the relevant DLLs for most system APIs documented.
Start here... post a follow-up if you get stuck again.
I ended up creating a batch file that runs netstat in a tight loop and appends its output to a text file. I ran this batch file while running the system, and by combing through all of the netstat dumps, I was able to find a dump that contained the PIDs associated with the ports.
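The loop itself is trivial. Mine was a Windows batch file, but the same idea sketched as a shell loop (the netstat flags differ: -ano on Windows, -anp as root on Linux) looks roughly like this:
# Poll netstat rapidly, timestamping and appending each snapshot to a dump file.
while true; do
    date +%s.%N >> netstat-dump.txt
    netstat -anp >> netstat-dump.txt
    sleep 0.1
done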