So I'm running a WebSocket server on my node; it's written in Python and listens on port 80. The problem is that when many requests come in, the WebSocket script terminates and port 80 stops listening. I created a checker bash script that runs every 3 minutes as a cron job, using netcat to check whether port 80 is open and restarting the script if it isn't. My question is: is there a good method to keep the WebSocket script (and port 80) running? Three minutes is not enough, since it dies quickly. I heard about systemd services — is that better than running a cron job every 3 minutes? If yes, can you show me a sample of how to set it up? Thank you!
What I've tried: a checker bash script run as a cron job every 3 minutes, using netcat to test whether port 80 is open and restarting the script if not — but that is not enough.
It would be useful if you provided your WebSocket server code. I guess you are using something like websockets + asyncio, such as this echo server:
#!/usr/bin/env python
import asyncio
import websockets

async def echo(websocket):
    async for message in websocket:
        await websocket.send(message)

async def main():
    async with websockets.serve(echo, "localhost", 80):
        await asyncio.Future()  # run forever

asyncio.run(main())
Using systemd can be useful for you, since your server is crashing. You can set up your script to run as a service, possibly with a watchdog timeout: systemd restarts the service unless it keeps receiving periodic sd_notify pings confirming the service is running properly.
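As a sketch, a unit file for a setup like this might look as follows. The service name and paths are assumptions — adjust them to your script. The commented watchdog lines only help if the script itself pings systemd via sd_notify("WATCHDOG=1") (e.g. with the Python sdnotify package):

```ini
# /etc/systemd/system/websocket.service  (hypothetical name and paths)
[Unit]
Description=Python WebSocket server on port 80
After=network.target

[Service]
ExecStart=/usr/bin/python3 /opt/app/websocket_server.py
# Restart the process automatically whenever it dies, after a 2 s pause
Restart=always
RestartSec=2
# Optional watchdog: requires the script to ping systemd periodically
# via sd_notify("WATCHDOG=1"), together with Type=notify
#Type=notify
#WatchdogSec=30

[Install]
WantedBy=multi-user.target
```

Install it with `systemctl daemon-reload && systemctl enable --now websocket.service`; systemd then restarts the process within seconds of a crash, with no cron checker needed.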
I fixed my issue. I created a systemd service, still with the same checker, now running every 10 seconds. I'm going to observe whether this works well.
I wrote a Go program that does not serve incoming HTTP requests at all by default. I tried to deploy it on Google Cloud Run and received the following error:
The user-provided container failed to start and listen on the port
defined provided by the PORT=8080 environment variable. Logs for this
revision might contain more information.
I understand it happens because my code doesn't provide a port. As this answer states:
container must listen for incoming HTTP requests on the port
that is defined by Cloud Run and provided in the $PORT environment
variable
My question is: what can I do if I don't want to define any port and just want to run the same code I run locally? Is there an alternative way to deploy my code without it, or must I add it anyway if I want to run the code on Cloud Run?
For containers that do not require an HTTP listener (HTTP server), use Cloud Run Jobs.
Cloud Run Jobs is in preview.
Your Go program must exit with exit code 0 for success and non-zero for failure.
Your container should not listen on a port or start a web server.
Environment variables differ from those of Cloud Run services.
Container instances run until the container instance exits, until the task timeout is reached, or until the container crashes. Task timeout default is 10 minutes, max is one hour.
Cloud Run - Create jobs
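To illustrate the exit-code requirement above, a minimal Go program shaped for Cloud Run Jobs might look like this; `doTask` is a hypothetical placeholder for your own batch logic, not part of Cloud Run's API:

```go
package main

import (
	"fmt"
	"os"
)

// doTask stands in for the program's actual batch work.
// No HTTP listener is started anywhere.
func doTask() error {
	fmt.Println("work done")
	return nil
}

func main() {
	if err := doTask(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1) // non-zero exit marks the job task as failed
	}
	// returning from main exits with code 0, marking the task as successful
}
```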
I use Erlang ftp lib in my elixir project to send file to ftp server.
I call send function :ftp.send(pid, '#{local_path}', '#{remote_path}') to upload file to ftp server.
Most of the time it uploads files successfully, but it sometimes gets stuck there and never moves to the next line.
According to the docs it should return :ok or {:error, reason}, but the call simply hangs at :ftp.send.
Can anyone give me suggestion? I am not familiar with Erlang.
Version: Elixir 1.7.3 (compiled with Erlang/OTP 21)
The ftp module has two kinds of timeout, both set during initialization of the ftp service.
Here is an excerpt from the documentation:
{timeout, Timeout}
Connection time-out. Default is 60000 (milliseconds).
{dtimeout, DTimeout}
Data connect time-out. The time the client waits for the server to connect to the data socket. Default is infinity.
The data connect timeout has a default value of infinity, meaning the call will hang forever if there is a network issue. To overcome the problem, I'd suggest setting this value to something meaningful and handling timeouts appropriately in your application.
{:ok, pid} = :ftp.start_service(
host: '...', timeout: 30_000, dtimeout: 10_000
)
:ftp.send(pid, '#{local_path}', '#{remote_path}')
I'm currently trying to perf test my application running in production. Specifically, I'm trying to see how many ssl connections my jetty server can handle. Let's call this host my.webserver.prod.com and it expects secure traffic on port 443. I would like to write a bash script that I can run from another host and have it generate as many ssl connections as possible. How can I do this?
What have you tried till now?
Still, you could check out ApacheBench (ab), the Apache HTTP server benchmarking tool, run from the command line.
It helps with checking performance, and at the end it gives you a summary.
The command looks like this (use an https URL for your SSL endpoint on port 443):
ab -n 10 -c 2 https://my.webserver.prod.com/
Where
-n -> total number of requests to make
-c -> number of concurrent requests
You could do more research on this.
Hope this helps.
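For the "as many ssl connections as possible" part specifically, a bash loop around `openssl s_client` can hold many concurrent TLS sessions open. This is only a sketch — the function name and counts are assumptions, and you should point it only at hosts you own:

```shell
#!/usr/bin/env bash
# Hypothetical TLS load generator: opens COUNT concurrent connections
# to HOST:PORT using openssl s_client.
open_tls_connections() {
  local host=$1 port=$2 count=$3
  for ((i = 0; i < count; i++)); do
    # Each s_client performs a full TLS handshake; </dev/null closes
    # stdin so the session ends as soon as the handshake completes.
    openssl s_client -connect "${host}:${port}" -quiet </dev/null >/dev/null 2>&1 &
  done
  wait  # block until every background connection has finished
}

# Example usage (adjust host/port/count for your environment):
# open_tls_connections my.webserver.prod.com 443 200
```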
I made a UDP server class and my program creates a process (running in the background). It is a command line utility, and so running 'udpserver.exe start' would bind the socket and begin a blocking recvfrom() call inside a for(;;) loop.
What is the best way to safely and 'gracefully' stop the server?
I was thinking about 'udpserver.exe stop' would send a udp msg such as 'stop' and the ongoing process from 'udpserver.exe start' would recognize this msg, break from the loop, and clean up (closesocket/wsacleanup).
Also, is just killing the process not a good idea?
Why are you running the UDP server as an external process instead of as a worker thread inside of your main program? That would make it a lot easier to manage the server. Simply close the socket, which will abort a blocked recvfrom(), thus allowing the thread to terminate itself.
But if you must run the UDP server in an external process, personally I would just kill that process, quick and simple. But if you really want to be graceful about it, you could make the server program handle CTRL-BREAK via SetConsoleCtrlHandler() so it knows when it needs to close its socket, stopping recvfrom(). Then you can have the main program spawn the server program via CreateProcess() with the CREATE_NEW_PROCESS_GROUP flag to get a group ID that can then be used to send CTRL-BREAK to the server process via GenerateConsoleCtrlEvent() when needed.
I have a test driver program that launches a separate test server process. The test server process listens on a local port, and after it's ready, the test driver runs a test that accesses the test server.
Currently the test driver repeatedly tries to connect to the local port (loop some, sleep some, try again). It's not an optimal solution, and is clearly unreliable.
Is it possible to wait for some event that says "somebody listens on a local port"? Trying to connect too early results in a "port closed" error.
I'd like to implement the solution on Windows, Linux, and Mac OS X. If you have some tips for any of these systems, it's welcome (it's probably going to be system-specific in each case).
On Windows I use a named event for this kind of thing.
The test harness can create the event and communicate the name of the event to the server that it launches; it then waits on the event to be signalled before continuing the test. The server then connects to the event, initialises itself and once it's ready to accept connections it signals the event.
Well, if you launch the server process, you can intercept the stdout of the server right?
So have the server output "server started" when the socket is ready. The driver should wait until the server writes this string to stdout, then try to connect to the server port.
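A minimal self-contained sketch of that stdout handshake in Python — the inline child script and the exact "server started" marker are assumptions to adapt to your real test server:

```python
import socket
import subprocess
import sys

# Stand-in for the test server: binds a port, announces readiness on
# stdout, then accepts exactly one connection.
CHILD = r"""
import socket
srv = socket.socket()
srv.bind(("127.0.0.1", 0))          # let the OS pick a free port
srv.listen(1)
print("server started", srv.getsockname()[1], flush=True)  # ready marker + port
conn, _ = srv.accept()
conn.close()
srv.close()
"""

def wait_for_server():
    proc = subprocess.Popen([sys.executable, "-c", CHILD],
                            stdout=subprocess.PIPE, text=True)
    # Block until the child announces readiness on stdout.
    for line in proc.stdout:
        if line.startswith("server started"):
            port = int(line.split()[-1])
            break
    else:
        raise RuntimeError("server exited before becoming ready")
    # The port is guaranteed to be listening now: no retry loop needed.
    with socket.create_connection(("127.0.0.1", port)):
        pass
    proc.wait()
    return port

if __name__ == "__main__":
    print("connected on port", wait_for_server())
```

The same pattern works on Windows, Linux, and macOS, since it relies only on pipes and sockets.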