Issue with GSM FTP file upload - ftp

We are trying to upload an image file to an FTP server using a GSM module. We are able to upload small .txt files (less than 1 KB).
But we are not able to upload larger files (an image file of about 10 KB): when we try to connect to the server, we get the following response
"
AT+FTPPUT=1
OK
+FTPPUT:1,300
"
From our understanding of this response, we can only upload a file of up to 1300 bytes.
How can we upload larger files (around 10 KB) to the server?
Do we need to split the file? (We tried that, but recombining may cause errors; see the sketch below.)
I request your support in this regard. Thanks in advance.
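For reference, on SIMCom-style modules (SIM800/SIM900 and similar) the trailing number in the +FTPPUT response is typically the maximum number of bytes the module accepts per write, not a limit on the total file size, so a larger file is sent as a loop of AT+FTPPUT=2 writes within the same session and no manual splitting or recombining is needed. Below is a minimal, untested Python sketch using pyserial; the port name, baud rate, chunk size, and response handling are all assumptions to adapt to your module:

# Minimal chunked-upload sketch for a SIMCom-style module over a serial
# line. Assumes the FTP parameters (AT+FTPSERV, AT+FTPUN, AT+FTPPW,
# AT+FTPPUTNAME, AT+FTPPUTPATH) have already been configured.
import serial

def send_at(ser, cmd, wait=2):
    """Send one AT command and return whatever the module answers."""
    ser.timeout = wait
    ser.write((cmd + "\r\n").encode())
    return ser.read(256).decode(errors="replace")

ser = serial.Serial("/dev/ttyUSB0", 9600)      # assumed port and baud rate

print(send_at(ser, "AT+FTPPUT=1", wait=15))    # open the PUT session
# The reply (e.g. "+FTPPUT: 1,1,300") reports the max bytes per write;
# parse it in real code instead of hard-coding it as below.
max_chunk = 300

with open("image.jpg", "rb") as f:             # the ~10 KB file to upload
    while True:
        chunk = f.read(max_chunk)
        if not chunk:
            break
        send_at(ser, "AT+FTPPUT=2,%d" % len(chunk))  # announce chunk size
        ser.write(chunk)                             # raw payload bytes
        ser.read(64)                                 # drain the "+FTPPUT"/OK reply

print(send_at(ser, "AT+FTPPUT=2,0"))           # zero-length write ends the session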

Related

Laravel Lumen directly Download and Extract ZIP file to Google Cloud Storage

My goal is to download a large zip file (15 GB) and extract it to Google Cloud Storage using Laravel Storage (https://laravel.com/docs/8.x/filesystem) and https://github.com/spatie/laravel-google-cloud-storage.
My "wish" is to sort of stream the file to Cloud Storage, so I do not need to store the file locally on my server (because it is running in multiple instances, and I want to keep the disk usage as small as possible).
Currently, there does not seem to be a way to do this without saving the zip file on the server first, which is not ideal in my situation.
Another idea is to use a Google Cloud Function (e.g. with Python) to download, extract, and store the file. However, Google Cloud Functions seem to be limited to a maximum timeout of 9 minutes (540 seconds), and I don't think that will be enough time to download and extract 15 GB...
Any ideas on how to approach this?
You should be able to use streams for uploading big files. Here’s the example code to achieve it:
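// Passing a stream handle instead of the file contents lets Flysystem
// upload in chunks rather than buffering the whole archive in memory.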
$disk = Storage::disk('gcs');
$disk->put($destFile, fopen($sourceZipFile, 'r'));

Flutter web: Download large files by reading a stream from the server?

There are already several articles about starting downloads from Flutter web.
I link this answer as an example:
https://stackoverflow.com/a/64075629/15537341
The procedure is always similar: request something from a server, maybe convert the body bytes to base64, and then use an AnchorElement to start the download.
It works perfectly for small files. Let's say 30 MB: no problem.
But the whole file has to be loaded into the browser first; only then does the user's download start.
What to do if the file is 10 GB?
Is there a way to read a stream from the server and write a stream to the user's download? Or is another way preferable, like copying the file to a special folder that is directly hosted by the webserver?

What is the correct way for an FTP server to prevent corrupted uploaded files because of late append?

Using pure-ftpd, I uploaded 1% of a 1,276,541,542-byte file, or about 15 MB. Then I killed the network connection abnormally to simulate a client getting kicked off by their ISP. Then I waited an hour, reconnected, issued an APPE (append) command, and uploaded the rest of the file. The final size of the file on the server after the upload finished was 1,292,326,238 bytes, i.e. about 15 MB MORE than it should be: a corrupt file. What is the correct way for an FTP server to prevent corrupted uploaded files because of a late append?
There is no way for the FTP server itself to prevent corrupted uploads, because the server does not know what the file should contain.
But the server can help the client do a proper upload by implementing the SIZE command. With it, the client can determine the current file size on the server, and thus the position in the file from which the upload should be continued, instead of blindly appending. Of course, this logic has to be implemented on the client side; a sketch follows.
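As an illustration of that client-side logic, here is a rough sketch with Python's ftplib (host, credentials, and filename are placeholders): it uses SIZE to find the server's current offset, then REST + STOR at that offset instead of a blind APPE.

# Resume an interrupted upload: ask the server how many bytes it already
# has (SIZE), skip that many locally, and restart the transfer at that
# exact offset with REST + STOR instead of a blind APPE.
from ftplib import FTP, error_perm

ftp = FTP("ftp.example.com")              # placeholder host
ftp.login("user", "password")             # placeholder credentials
ftp.voidcmd("TYPE I")                     # SIZE is only reliable in binary mode

try:
    remote_size = ftp.size("upload.bin")  # bytes the server already holds
except error_perm:
    remote_size = 0                       # file does not exist yet

with open("upload.bin", "rb") as f:
    f.seek(remote_size)                   # continue from the server's offset
    # rest= makes ftplib send "REST <offset>" before the STOR, so the
    # server writes at exactly that position.
    ftp.storbinary("STOR upload.bin", f, rest=remote_size)

ftp.quit()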
I have a pure-ftpd answer about its upload script.
I'm running pure-uploadscript --run /home/aa/done.rb --daemonize
and my done.rb program is:
#!/usr/bin/env ruby
# Print and drop a marker file so we can tell the upload hook fired.
puts "done"
f = File.open("/home/aa/ddd.txt", "w")
f << "test"
f.close
And when I run pure-ftpd --uploadscript and upload a file, sure enough the done.rb program is run.
(I know it ran because there is a new file called ddd.txt.)
BUT when I upload a big file and kill the FTP client in the middle of the upload, done.rb is STILL run. (Yes, I deleted ddd.txt first.)
Therefore, the answer to the question is: even pure-ftpd can't handle this, because of the limits of the FTP protocol.

Talend - Read files from several FTP servers

I have several FTP servers (4 servers) on which files are generated by an application.
This application generates the same type of file, with the same structure, on all 4 servers.
With Talend, whenever a file changes on one of the servers, I need to retrieve its data and put it in ActiveMQ.
What could you suggest? Because with tFTP I don't have tWaitForFile.
Staying within that architectural approach... you could poll the FTP servers to detect a change in a file's last-modified timestamp or size; a sketch of that loop follows.
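Outside of Talend, the polling idea looks roughly like this Python/ftplib sketch; the hosts, credentials, filename, and the ActiveMQ publishing step are placeholders, and it assumes the servers support the MDTM and SIZE commands. In Talend the same loop could drive a tFTP component on a schedule.

# Poll each FTP server and flag a file as changed when its MDTM
# (last-modified) or SIZE differs from the previous pass.
import time
from ftplib import FTP

SERVERS = ["ftp1.example.com", "ftp2.example.com",
           "ftp3.example.com", "ftp4.example.com"]    # placeholder hosts
FILENAME = "export.dat"                               # placeholder file name

def snapshot(host):
    ftp = FTP(host)
    ftp.login("user", "password")                     # placeholder credentials
    ftp.voidcmd("TYPE I")                             # SIZE needs binary mode
    mdtm = ftp.sendcmd("MDTM " + FILENAME)            # "213 YYYYMMDDHHMMSS"
    size = ftp.size(FILENAME)
    ftp.quit()
    return mdtm, size

last_seen = {}
while True:
    for host in SERVERS:
        state = snapshot(host)
        if last_seen.get(host) not in (None, state):  # changed since last pass
            print("change detected on", host)
            # hypothetical hook: fetch the file and publish it to ActiveMQ
        last_seen[host] = state
    time.sleep(60)                                    # poll once a minute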

How to verify if upload is finished in SFTP [duplicate]

This question already has answers here:
How to confirm SFTP file delivery?
(3 answers)
Closed 1 year ago.
I'm uploading a file through SFTP to a destination server using bash scripts.
How can I be sure that the uploaded file is complete, in case sftp does not return anything or the network connection breaks?
I see that I can get the size of the file before uploading it to the server and then compare it with the size of the file on the server.
Perhaps you can mention other, better options?
Thank you.
I think getting the size is a good option.
What I could imagine:
Client side:
- Put the size of the file and its md5 in a file, like ".fileinfo"
- Send the ".fileinfo" file to the server
- Send the (interesting) file to the server
Server side:
- Check the files in a folder periodically (with the "watch ls" command, for example)
- If a ".fileinfo" exists, read it and check whether the size corresponds to an existing file of the same name (without ".fileinfo"). If the size corresponds, do an "md5sum" of the file and check whether it corresponds too. If yes, move the file into your final destination folder and delete the ".fileinfo" file. If not, try again on the next pass; a rough sketch of this loop follows.
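A rough sketch of that server-side loop in Python, assuming each ".fileinfo" holds the size and the md5 separated by whitespace; the folder paths and the ten-second interval are placeholders:

# Watch an incoming folder; when a ".fileinfo" (size + md5) and its data
# file both check out, move the data file to the final destination.
import hashlib, os, shutil, time

INCOMING, FINAL = "/srv/ftp/incoming", "/srv/ftp/done"   # placeholder paths

def md5_of(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

while True:
    for name in os.listdir(INCOMING):
        if not name.endswith(".fileinfo"):
            continue
        info_path = os.path.join(INCOMING, name)
        data_path = info_path[:-len(".fileinfo")]         # strip the suffix
        if not os.path.exists(data_path):
            continue                                      # data not here yet
        expected_size, expected_md5 = open(info_path).read().split()
        if (os.path.getsize(data_path) == int(expected_size)
                and md5_of(data_path) == expected_md5):
            shutil.move(data_path, FINAL)                 # verified, release it
            os.remove(info_path)
    time.sleep(10)                                        # poll again shortly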
Many software download sites provide both the software and its checksum.
We can use the same technique to check our uploaded file:
upload the file together with its checksum, and on the server side compare the file's checksum with the uploaded checksum.
If the two don't match, you will know that:
The uploaded file is corrupted, or
The uploaded checksum is corrupted, or
Both the checksum and the file are corrupted.
Test the exit code of sftp. If it returns 0, you can be pretty sure that everything is OK (assuming you are using OpenSSH sftp). This works only when you use the -b switch (which I assume you are doing); see the sketch below.
The SFTP protocol allows checksum calculation, but I suppose you are stuck with OpenSSH (on either or both sides), which does not support it.
To be 100% sure, you can download the file back and compare it with the original.
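For instance, from a wrapper script the batch-mode exit code can be checked like this (a small Python sketch; the batch file name and host are placeholders, and in plain bash the same check is just inspecting $? after sftp -b):

# Run OpenSSH sftp in batch mode; with -b, sftp aborts and exits non-zero
# as soon as any command in the batch file fails.
import subprocess

result = subprocess.run(["sftp", "-b", "batch.txt", "user@host.example.com"])
if result.returncode == 0:
    print("batch completed, upload is very likely intact")
else:
    print("upload failed, exit code", result.returncode)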
