I use the Erlang ftp library in my Elixir project to send files to an FTP server.
I call :ftp.send(pid, '#{local_path}', '#{remote_path}') to upload a file to the FTP server.
Most of the time it uploads the file successfully, but sometimes it gets stuck there and never moves on to the next line.
According to the docs it should return :ok or {:error, reason}, but it simply hangs inside :ftp.send.
Can anyone give me a suggestion? I am not familiar with Erlang.
Version: Elixir 1.7.3 (compiled with Erlang/OTP 21)
The ftp module has two kinds of timeout, both set when the FTP service is started.
Here is an excerpt from the documentation:
{timeout, Timeout}
Connection time-out. Default is 60000 (milliseconds).
{dtimeout, DTimeout}
Data connect time-out. The time the client waits for the server to connect to the data socket. Default is infinity.
The data connect time-out defaults to infinity, meaning the call can hang forever if there is a network issue. To avoid that, I'd suggest setting this value to something meaningful and handling timeouts appropriately in your application.
# Start the FTP client with explicit connection and data-connection timeouts
{:ok, pid} = :ftp.start_service(
  host: '...', timeout: 30_000, dtimeout: 10_000
)

# Returns :ok on success, or {:error, reason} (e.g. on a timeout) instead of hanging
:ftp.send(pid, '#{local_path}', '#{remote_path}')
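With dtimeout set to a finite value, the data-connection attempt should eventually give up, so :ftp.send/3 should come back with an error tuple rather than blocking forever, and the calling code can match on {:error, reason} to retry or report the failure.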
Related
I am writing a Go ssh/sftp client which connects to an SFTP server that is slow to connect and slow to write files, using the golang.org/x/crypto/ssh package. I need to set a connection timeout and an SO timeout (as we do with the Java JSch library).
First, to get a connection timeout, I used ssh.ClientConfig.Timeout, but it only worked for nanosecond and microsecond values, not for milliseconds and above, where I needed to set 5 seconds. From the API docs I also assume ssh.ClientConfig.Timeout only covers creating the TCP socket connection and does not include the SSH handshake.
So then I tried net.Conn.SetDeadline(), but that applies end to end: connection creation plus writing the file plus closing the connection. Since that is not what I want either, I tried net.Conn.SetWriteDeadline(), which looks like an SO timeout (applied at the TCP packet level), but the timeout error does not appear right after the duration elapses; it only shows up after the server's late reply or when the next write operation starts.
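The direction I have in mind is sketched below (just a sketch under assumptions: deadlineConn and dialSSH are illustrative names of my own, and the address, credentials and durations are placeholders). The idea is to dial the TCP connection myself with net.DialTimeout to bound connection setup, then wrap the net.Conn so every Write refreshes a write deadline before handing it to ssh.NewClientConn.

package main

import (
	"log"
	"net"
	"time"

	"golang.org/x/crypto/ssh"
)

// deadlineConn refreshes a write deadline before every Write, so a stalled
// write fails after writeTimeout instead of blocking indefinitely.
type deadlineConn struct {
	net.Conn
	writeTimeout time.Duration
}

func (c *deadlineConn) Write(p []byte) (int, error) {
	if err := c.Conn.SetWriteDeadline(time.Now().Add(c.writeTimeout)); err != nil {
		return 0, err
	}
	return c.Conn.Write(p)
}

// dialSSH bounds the TCP dial with connectTimeout and applies writeTimeout
// to every later write on the underlying connection.
func dialSSH(addr string, cfg *ssh.ClientConfig, connectTimeout, writeTimeout time.Duration) (*ssh.Client, error) {
	raw, err := net.DialTimeout("tcp", addr, connectTimeout)
	if err != nil {
		return nil, err
	}
	conn := &deadlineConn{Conn: raw, writeTimeout: writeTimeout}
	c, chans, reqs, err := ssh.NewClientConn(conn, addr, cfg)
	if err != nil {
		raw.Close()
		return nil, err
	}
	return ssh.NewClient(c, chans, reqs), nil
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "user",                                   // placeholder
		Auth:            []ssh.AuthMethod{ssh.Password("secret")}, // placeholder
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),              // fine for a sketch; verify host keys for real use
	}
	client, err := dialSSH("example.com:22", cfg, 5*time.Second, 30*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}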
So can someone please show the correct way of setting a connection timeout and an SO timeout with the Go ssh package, or tell me whether this is supported at all?
I am building a TCP proxy: client <-> proxy <-> Vertica
I have a net.TCPListener that accepts incoming requests with AcceptTCP() to create the client connections, and then dials the destination socket with net.DialTCP("tcp", nil, raddr). It works like a bridge, the usual proxy model, roughly as sketched below.
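A minimal version of that bridge, for reference (a sketch only: the listen and Vertica addresses are placeholders and error handling is reduced to logging):

package main

import (
	"io"
	"log"
	"net"
)

func main() {
	const listenAddr = "127.0.0.1:15433" // placeholder proxy address
	const verticaAddr = "127.0.0.1:5433" // placeholder Vertica address

	laddr, err := net.ResolveTCPAddr("tcp", listenAddr)
	if err != nil {
		log.Fatal(err)
	}
	raddr, err := net.ResolveTCPAddr("tcp", verticaAddr)
	if err != nil {
		log.Fatal(err)
	}
	ln, err := net.ListenTCP("tcp", laddr)
	if err != nil {
		log.Fatal(err)
	}
	for {
		client, err := ln.AcceptTCP()
		if err != nil {
			log.Printf("accept: %v", err)
			continue
		}
		go func(client *net.TCPConn) {
			defer client.Close()
			backend, err := net.DialTCP("tcp", nil, raddr)
			if err != nil {
				// Logging here makes otherwise silent connection failures visible.
				log.Printf("dial backend: %v", err)
				return
			}
			defer backend.Close()
			// Pump bytes in both directions until either side closes.
			go io.Copy(backend, client)
			io.Copy(client, backend)
		}(client)
	}
}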
First, in the initial version, I had a problem: with 59 parallel incoming requests everything is fine, but with one more (60) the trouble starts: connections 1-59 are OK, but number 60 and later fail. I can't catch the error properly; it looks like some socket unexpectedly closes.
Second, I tried adding a queue in front of the listener. That helps a lot, but with more than 258 requests I get errors again.
My question: is there any connection limit in the net package? Or maybe it is a system limitation?
For context: Vertica runs in a Docker container; the hardware/system is a MacBook; the Vertica connection pool limit is 5, but the pool logic is implemented in the proxy.
I also tried a "raw" proxy without the pool logic (that's why I put the queue in front of the listener: I must not exceed the Vertica user's pool threshold); the result is still about 258 requests.
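If it is a system limitation, one likely suspect is the per-process open-file limit: the default soft limit on macOS is 256 descriptors, which is suspiciously close to the ~258 ceiling above (an assumption to verify, not a diagnosis). The proxy could inspect and try to raise that limit at startup; a sketch (reportAndRaiseFDLimit and the 4096 target are illustrative):

package main

import (
	"fmt"
	"syscall"
)

// reportAndRaiseFDLimit prints the current RLIMIT_NOFILE values and tries to
// raise the soft limit. macOS may cap the effective value below the reported
// hard limit (see kern.maxfilesperproc), so a modest target is used here.
func reportAndRaiseFDLimit(target uint64) error {
	var lim syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		return err
	}
	fmt.Printf("open files: soft=%d hard=%d\n", lim.Cur, lim.Max)
	if lim.Cur >= target {
		return nil // already high enough
	}
	if target > lim.Max {
		target = lim.Max
	}
	lim.Cur = target
	return syscall.Setrlimit(syscall.RLIMIT_NOFILE, &lim)
}

func main() {
	if err := reportAndRaiseFDLimit(4096); err != nil {
		fmt.Println("could not raise fd limit:", err)
	}
}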
UPDATED: (05.04.2020)
It looks like it was a system limitation after all. Did I mention anywhere that I was trying to run the whole system on one PC?
So, what I had:
300 parallel requests issued as processes (via Python's multiprocessing.Pool), i.e. 300 sockets
a listener that creates 300 connections (another 300 sockets)
and a series of rapidly created and closed sockets deep inside the proxy (driven by the queue and the Vertica pool)
What I have now:
300 Python requests made from another PC on my local network (running Windows)
The proxy works fine
But I get several errors on the Windows PC that generates the requests to my proxy, errors like low memory in the "swap file".
I still need to run some stress tests on the proxy. Changing the swap-file size didn't solve the problem on the Windows PC. I will be grateful for any suggestions and ideas. Thanks!
How does the proxy connect to Vertica?
By default, a maximum of 50 ordinary mortal users can be connected to one Vertica node at any one time. The superuser "dbadmin" always gets 5 additional connections on top of that.
So if I try to connect 60 times as dbadmin, I get this on a default Vertica configuration:
Connection attempt failed: FATAL 4060: New session rejected due to limit, already 55 sessions active
You can increase the Vertica config item MaxClientSessions from its default of 50 per node.
The command is, for example: ALTER NODE <node_name> SET MaxClientSessions = 100;
I suppose you are always connecting to the same Vertica node, and that you have set ConnectionLoadBalancing to FALSE. So you always connect to the same node, and soon reach the default maximum of 50.
Hope that turns out to be the reason ...
All the required changes have been made in the respective files, such as:
stalecheck=true,
keepalive is checked from HTTP request defaults,
retrycount=1,
hc.parameters file changes,
Socket timeout is 240000
Still, we see "java.net.SocketException: Connection reset" in the response data, even though I can see valid requests being passed to the server.
The issue did not appear until we reached 3000 users; everything worked smoothly up to 3000 users.
"Connection reset" can mean a lot of things; possible reasons are:
One of the server components is not able to handle the load, so it closes connections on its side.
On the JMeter side, check that you are running in non-GUI mode and that neither the JMeter JVM nor the injector machine is overloaded, which could also explain this. See:
https://jmeter.apache.org/usermanual/get-started.html#non_gui
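For example, running the test plan from the command line as jmeter -n -t plan.jmx -l results.jtl (plan.jmx and results.jtl are placeholder file names) avoids the GUI overhead during the actual load test.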
I'm using beanstalkd to offload some work to other machines. The setup is a bit unusual: the server is on the internet (public IP), but the consumers are behind ADSL lines in people's homes. So a Linux box acts as the client, going out through a dynamic IP and connecting to the server to get a job. It's all PHP, and I'm using the Pheanstalk library.
Everything runs smoothly for a while, but when the ADSL line changes its IP (every 24 hours the provider forces a disconnect/reconnect), the client just hangs and never comes out of "reserve".
I thought that putting a timeout on the reserve would help, but it didn't. It seems the client issues a command and blocks; it never checks the timeout itself. It just issues a reserve-with-timeout (instead of a plain reserve), and it is the server's responsibility to return TIMED_OUT when the timeout occurs. The problem is that the connection is broken (but TCP/IP does not know that yet, until one side tries to talk to the other), so if the client is blocked reading, it will never return.
The library seems to support some timeouts locally (for example when trying to connect to the server), but it does not seem to cover this scenario.
How can I detect the stale connection and force a reconnect? Is there some kind of keepalive in the protocol (and in Pheanstalk itself)?
Thanks!
You could try to close each connection right after the request is answered and reopen a new connection each time.
There is no close() function, but deleting the Pheanstalk object with unset($pheanstalk) will close the connection.
This explanation is quite helpful:
Pheanstalk (PHP client for beanstalk) - how do connections work?
I haven't tried it yet, but I came up with the idea of connecting to the beanstalk server through an SSH tunnel. We can enable the ServerAliveCountMax and ServerAliveInterval options on the tunnel, so that a network or server failure will cause the tunnel to close. This should then cause the pheanstalk client to report an error.
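For example, a tunnel opened with something like ssh -N -L 11300:localhost:11300 -o ServerAliveInterval=30 -o ServerAliveCountMax=3 user@server (11300 is beanstalkd's default port; the user and host are placeholders) should be torn down within roughly 90 seconds of the link dying, and the broken local socket then surfaces as an error in the client.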
When trying to receive a (large, approx. 100MB) file using an FTP adapter in BizTalk 2006, we run into the following problem, which causes the file to be processed over and over again.
Retrieving the file succeeds; it is placed into the MessageBox and processed properly
When the FTP adapter issues the DELE command, it never reaches the FTP server the file is on (we have verified this by looking at the FTP server's logs)
There are no signs of timeouts on the FTP server; the FTP server log does not mention any timeout occurring
After the interval set on the adapter expires, the large file we already processed in the previous run is still there on the FTP server and gets picked up again, because the DELE command failed
The event log in BizTalk states that ‘The connection to the FTP server was broken prematurely’. That is why we think there is a timeout issue.
We have seen that retrieval of the file takes around 35 minutes, while the FTP server timeout is set to 1 hour, so no problem there, I guess.
Then we found the following article: http://www.ncftp.com/ncftpd/doc/misc/ftp_and_firewalls.html#FirewallTimeouts. It states that a firewall or routing device might be responsible for the timeouts. The team managing our firewalls and routers told us that no such timeouts are configured there.
That leaves us in the dark about the cause of the problem. Does anyone have a suggestion? Or, even better, the solution?
Have you tried the solutions in this article?
I avoid using the FTP adapter. Instead I use a third party utility to retrieve files and move the transferred file to a file adapter receive location. Third party utilities allow you to configure rules, recovery actions etc, freeing BizTalk from having to manage the transfer.