I'm using Ubuntu 18.04 LTS with bash 4.4.20.
I'm trying to create a little daemon to schedule data transmission between threads.
On the server I am doing this:
ncat -l 2001 -k -c 'xargs -n1 ./atc-worker.sh'
On the client I am doing this:
echo "totally-legit-login-token" | nc 127.0.0.1 2001 -w 1
And it works well!
Here is the response:
LaunchCode=1589323120.957093305 = Now=1589323120.957093305 = URL=https://totally-legit-url.com/ = AuthToken=totally-legit-auth-token = LastID=167
When the server receives a request from a client, it calls my little atc-worker.sh script. The server spits out a single line of text and it is back to business, serving other clients.
Thanks to the -k option, the server listens continuously, and multiple clients can connect at the same time. The only problem is that I cannot end the connection programmatically. I need the daemon-style -k functionality on the server to answer requests from the clients, but I need the clients to quit listening after receiving a response and get on to their other work.
Is there an EOF signal/character I can send from my atc-worker.sh script that would tell nc on the client side to disconnect?
On the client, I use the -w 1 option to tell the client to connect for no more than a second.
But this -w 1 option has some drawbacks.
Maybe a second is too long. The connection should take only ~150 milliseconds, and waiting out the rest of the second slows each client down even when it already has its answer. And, as I said before, the client has chores to do! The client shouldn't be wasting its time after it has its answer!
Rogue clients with no intention of closing the connection in a timely manner could connect to the server, and I want the server to have better control so it can shut down such bad actors.
Maybe a second is too short. atc-worker.sh has a mechanism to wait for a lock file to be removed if there is one. If that lock file is there for more than a second, the connection will close before the client can receive its response.
Possible solutions.
The atc-worker.sh script could send a magic character set to terminate the connection. Problem solved.
On the client side, maybe curl would be a suitable choice instead of nc? But that would not address my concern about bad actors. Maybe these are two different problems: client-side, closing the connection immediately after an answer is received; and server-side, dealing with bad actors who will use whatever clients they choose.
Maybe use expect? I'm investigating that now.
Thanks in advance!
OK. After a lot of digging, I found someone else with a similar problem. Here is a link to his answer.
Original question: Client doesn't close connection to server after receiving all responses
Original Answer: https://stackoverflow.com/a/50528286/3055756
Thanks @Unyxos.
I modified my daemon to send an "ENDRESPONSE" line when it is done, even though it does not drop the connection. And I modified the client to look for that "ENDRESPONSE" line. When the client gets the line, it drops the connection using the logic that @Unyxos uses in his answer below.
Here is his simple and elegant answer:
Finally found a working way (maybe not the best but at least it's perfectly doing what I want :D)
after all functions I send an "ENDRESPONSE" message, and on my client, I test whether I have this message or not:
function sendMessage {
    # read -r keeps backslashes intact; quoting "$line" preserves whitespace
    while read -r line; do
        if [[ $line == "ENDRESPONSE" ]]; then
            break
        else
            echo "$line"
        fi
    done < <(netcat "$ipAddress" "$port" <<< "$*")
}
Anyway thanks for your help, I'll try to implement other solutions later !
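For completeness, the server-side half of the scheme is just one extra echo at the end of the worker. Since the real atc-worker.sh isn't shown in the question, this is a hypothetical sketch with a stand-in response line:

```shell
#!/bin/bash
# Hypothetical sketch of the server-side change: the real
# atc-worker.sh isn't shown, so this stand-in just prints a
# response line, then the ENDRESPONSE sentinel that the
# client's read loop watches for.
atc_worker() {
    echo "LaunchCode=1589323120 = URL=https://totally-legit-url.com/"
    echo "ENDRESPONSE"   # tells the client it can disconnect now
}

atc_worker
```

The sentinel rides inside the normal response stream, so no nc/ncat options need to change on either side.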
Related
I'm currently trying to perf test my application running in production. Specifically, I'm trying to see how many ssl connections my jetty server can handle. Let's call this host my.webserver.prod.com and it expects secure traffic on port 443. I would like to write a bash script that I can run from another host and have it generate as many ssl connections as possible. How can I do this?
What have you tried till now?
Still, you could check out ab (ApacheBench), a command-line benchmarking tool.
It helps with performance testing, and at the end it gives you a summary.
The command goes like (note the https:// URL, since you are testing SSL on 443):
ab -n 10 -c 2 https://my.webserver.prod.com/
Where
-n -> total number of requests to be made
-c -> number of concurrent requests
You could do more research on this.
Hope this helps.
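If the goal is specifically to see how many concurrent SSL connections the server will accept, one hedged sketch (untested against a real server; my.webserver.prod.com is the placeholder host from the question, and openssl is assumed to be installed) is to loop openssl s_client in the background and hold each TLS session open briefly:

```shell
#!/bin/bash
# Sketch: open N concurrent TLS connections with openssl s_client.
# HOST is the placeholder from the question; scale N up as needed.
HOST=my.webserver.prod.com
PORT=443
N=20

for i in $(seq "$N"); do
    # Each connection runs in the background; the sleep on stdin
    # holds the session open for a couple of seconds, and closing
    # stdin then ends it.
    (sleep 2 | openssl s_client -connect "$HOST:$PORT" -quiet) \
        >/dev/null 2>&1 &
done
wait   # returns once all background connections have finished
```

Unlike ab, this only exercises connection establishment, not request throughput; ab (or a similar HTTP benchmarking tool) is still the better fit for request-level numbers.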
I need to kill process ID from an established nginx connection to worker process.
Is there a way to get PID from all nginx established connections?
If I do netstat on nginx worker processes, I get the PIDs of the worker processes, which need to stay alive after I kill the process that is connected to them.
I've tried with netstat -anp | grep "client_ip_address" | grep ESTABLISHED
and I get this:
tcp 0 0 client_ip:dest_port client_ip:source_port ESTABLISHED 15925/nginx: worker
so 15925 would be the process ID that needs to stay alive when I kill the connection to it.
Is there a way to do it?
I think maybe you're confusing Process IDs and Connections. Nginx starts a master process which then spawn off a handful of worker processes. You might only have (say) 5 workers on a fairly busy system.
As connections come in, nginx wakes up and one of the workers is assigned to that connection. From then on, any TCP traffic that flows goes from the remote client to that worker process. Workers can each handle a large number of connections. Most HTTP connections only last for a few seconds, so as they close, they make space for the worker to take on more new connections.
So... if you're trying to use the shell command 'kill', the best you could ever do would be to terminate one of the worker processes, which would close (potentially) a large number of connections.
If your aim is to disconnect one client whilst leaving all the others connected, you're out of luck. There isn't a way to do this with shell commands. If your HTTP connections hang around for a long time (like WebSockets do, for example), then it's possible you could write something on the application side which allows you to close connections that you don't like.
One more thing you may be thinking of is to close connections from places you don't like (a sort of 'spam' blocker). The more usual way to do this is just to reject the connection out-right so it uses as few of your resources as possible. Again, this is something you can do dynamically on the application side, or else you could put something like Naxsi (https://github.com/nbs-system/naxsi/wiki) and fail2ban together (https://github.com/nbs-system/naxsi/wiki/A-fail2ban-profile-for-Naxsi).
I use grep's PERL regex to achieve things like this.
$ PID_TO_KILL=$(netstat -anp | grep "client_ip_address" | grep ESTABLISHED | grep -Po "(?<=ESTABLISHED)\s*\K[0-9]+(?=/nginx)")
$ kill -9 $PID_TO_KILL
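As a variation on the same extraction (assuming iproute2 is available), ss exposes the owning PID in a pid=… field, which is easier to grab than netstat's column. The sample line below is made up for illustration:

```shell
# extract_pid: pull the owning PID out of one line of `ss -tnp`
# output (e.g. from: ss -tnp state established dst client_ip_address)
extract_pid() {
    grep -oP 'pid=\K[0-9]+' <<< "$1" | head -n 1
}

# made-up sample line for illustration
sample='ESTAB 0 0 10.0.0.1:443 10.0.0.2:51234 users:(("nginx",pid=15925,fd=12))'
extract_pid "$sample"   # prints 15925
```

Note the caveat from the other answer still applies: that PID is the worker itself, so killing it drops every connection that worker is serving, not just the one client.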
The terminal does not print any error message, but I never receive the email.
this is my code:
mail -s "hello" "example@example.com" <<EOF
hello
world
EOF
Works fine for me:
pax> mail -s "hello" "pax" <<EOF
hi there
EOF
pax> mailx
Mail version 8.1.2 01/15/2001. Type ? for help.
"/var/mail/pax": 1 message 1 new
>N 1 pax@paxbox.com Sat Jun 14 10:25 16/629 hello
& _
You should try it with a local address first (as I have) to see if a mail is being created.
Beyond that, you should realise that mail simply adds mail messages into the mail system. If you want to find out what happens after that, you'll need to look into whatever MTAs (mail transfer agents) you have set up on your system.
If the MTA itself fails, you'll almost certainly get a mail back to the sending account stating so (you can use mailx as I have above, to discover this).
Since you haven't specified your systems, I'll give advice below based on Debian since that's what I'm used to.
On my Debian box, exim is the MTA but, by default, it does not support sending to remote domains. You can modify this by running:
sudo dpkg-reconfigure exim4-config
but you need to be careful not to relay emails lest you unknowingly become a spam-bot. More details can be found here.
You may find, if you want them to go to the outside world, that it's better to send them to your ISP via SMTP rather than trying to configure mail on your local box to do it.
However, if you want to go the mail route, simply run dpkg-reconfigure as above, select "Internet site; mail is sent and received directly using SMTP" as the answer to the first question, then accept defaults for all the other questions (checking to ensure you only accept mail from your local addresses 127.0.0.1 and ::1).
Then wait for exim to restart and try sending the mail again.
Just be aware that exim typically starts queue runners (the processes that actually send out your email) on a schedule (30 minutes for me) so it may take some time for the message to go out.
You can examine the files in /var/log/exim4 to see what's happening (such as, in my case, my ISP rejecting the attempt since it knows nothing about pax@paxbox.com; you may be able to find an open SMTP relay somewhere, or spoof your sending details to something your ISP will allow).
Has anyone ever looked into forming and sending test UDP packets from Bash? I need to test some UDP ports in addition to TCP ports in a piece of code. TCP is easy since it's connection-oriented. UDP, on the other hand, is a little more challenging. I would assume the packet would have to be built and sent out, then bash would have to wait for a reply, or time out, to determine if the port is open on the other end. Other utilities could be used, but I'd like to avoid them since it was so easy to do straight from Bash for TCP. Any thoughts on how to do this? The goal is a port-check tool to monitor servers. Yes, there are other tools out there like NMAP, but I don't need a complex port scanner.
* UPDATE *
Took Barmar's suggestion and tried using netcat, but I can't get it to work with UDP. The terminal just gets stuck whenever I try UDP:
netcat -uvz 8.8.8.8 53
This doesn't work either, and this is a straight example from Google:
nc -vnzu server.ip.address.here 1-65535 > udp-scan-results.txt
* UPDATE *
For those who say "you can't do that with netcat", can you explain why all these people say you can? What's the deal, netcat...
http://www.radarearth.com/content/using-netcat-udp-port-troubleshooting
https://gist.github.com/benhosmer/2429640
http://www.thegeekstuff.com/2012/04/nc-command-examples/
* UPDATE *
Here is yet another:
http://mikeberggren.com/post/16433061724/netcat
* UPDATE *
And here is some other evidence that contradicts everything above... sigh
Very bottom of page, under "Caveats"
http://linux.die.net/man/1/nc
* SOLUTION *
This ultimately helped me understand my own logic. Hopefully it will help others out there too.
http://serverfault.com/questions/416205/testing-udp-port-connectivity
There are different tools that allow for sending UDP packets, e.g. hping and nmap.
hping host.example.com --udp -V -p 53
nmap -sU -p 53 host.example.com
However, unlike TCP, UDP does not establish a connection, i.e. you don't automatically get a response.
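Since the goal was to stay close to plain Bash, it's also worth noting bash's built-in /dev/udp pseudo-device, the UDP twin of the /dev/tcp trick that makes the TCP check easy. A hedged sketch, with the caveat from the serverfault answer above: a successful send only means no ICMP port-unreachable error has come back yet, so this is a best-effort probe, not proof that the port is open:

```shell
#!/bin/bash
# udp_probe: best-effort UDP "check" using bash's /dev/udp device.
# A zero exit only means the datagram was handed off without error;
# UDP gives no acknowledgement, so "open" can't be confirmed.
udp_probe() {
    echo -n "probe" > "/dev/udp/$1/$2" 2>/dev/null
}

if udp_probe 8.8.8.8 53; then    # example host/port from the question
    echo "datagram sent (port may be open)"
else
    echo "send failed (no route, or not running under bash)"
fi
```

This requires bash itself (/dev/udp is a shell feature, not a real file), which fits the stated goal of avoiding extra utilities.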
I use SSH in a terminal window many times during a day.
I remember reading about a way to reuse a single connection so that the TCP and SSH handshaking don't have to happen every time I establish another request to the same host.
Can someone point me to a link or describe how to establish a shared ssh connection so that subsequent connections to the same host will connect quickly?
Thanks.
Answering my own question. Improving SSH (OpenSSH) connection speed with shared connections describes using the "ControlPath" configuration setting.
UPDATE: For connections that are opened and closed often add a setting like ControlPersist 4h to your ~/.ssh/config. See this post about SSH productivity tips.
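Put together, the settings mentioned above go into ~/.ssh/config along these lines (the host pattern and socket path are illustrative, and the socket directory must exist, e.g. mkdir -p ~/.ssh/sockets):

```
Host *
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h-%p
    ControlPersist 4h
```

With ControlMaster auto, the first ssh to a host becomes the master; later sessions multiplex over its socket and skip the TCP and SSH handshakes, and ControlPersist keeps the master socket alive for 4 hours after the last session closes.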
If you want to keep the terminal open, the easiest way is producing I/O ("tail -f" some file, or "while [ -d . ]; do cat /proc/loadavg; sleep 3; done").
If you want to improve the connection handshake, one way I use is adding "UseDNS no" to your sshd_config.