I have a WEBrick-based HTTP server running on a Windows machine and a client on a Linux machine.
I would like to transfer a ~2 GB file from my client program (which is not a browser) to the server program.
What is available in Ruby for this?
WEBrick is pure Ruby and not great at streaming in large amounts of data like that.
What I use for this is nginx with the upload module. nginx handles the upload to disk, then can issue a callback to something (say, a Rails app) with the original upload params and the path to the file that was just uploaded. Then you can rename/move it on disk, record its path in the database, etc.
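For the client side of the transfer, what plain Ruby offers is Net::HTTP with a streamed request body, so the whole 2 GB never has to sit in memory. A minimal sketch, assuming the receiving end accepts a raw PUT (the host, port, and /upload path are placeholders):

    require 'net/http'

    uri = URI('http://server.example.com:8000/upload')

    File.open('big_file.bin', 'rb') do |file|
      req = Net::HTTP::Put.new(uri)
      req['Content-Type'] = 'application/octet-stream'
      req.content_length = file.size   # size is known, so no chunked encoding needed
      req.body_stream    = file        # Net::HTTP reads and sends the file in chunks

      Net::HTTP.start(uri.host, uri.port) do |http|
        puts http.request(req).code
      end
    end

Whatever receives it (the nginx upload module, a CGI handler, etc.) just needs to write the body to disk as it arrives.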
I'm currently in the process of developing an OTA firmware update for a datalogger's ESP32 board via FTP and an EFS module, since all of the dataloggers I'm working with are in an area with no Wi-Fi coverage and can't use traditional OTA.
The idea is to use the AT commands of the datalogger's SIM5320 to transfer the binary file via an FTP server using an EFS module, and then transfer the file to my ESP32. I'm using an EFS module because I tried transferring the file with the FileZilla client and can't send files bigger than 500 bytes, which made the task not at all practical since my file is around 300 KB.
I found the AT command that does the task: AT+CFTPGETFILE ("Get a file from FTP server to EFS"), but I have little idea how to implement it.
Has anyone done this before, or does anyone know how to implement it?
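In case it helps anyone attempting the same thing: before porting this to the ESP32, the command sequence can be tried out from a host machine talking to the modem's UART. A rough Ruby sketch using the serialport gem (the device path, credentials, and exact parameter syntax are assumptions; verify them against the SIM5320 AT command manual):

    require 'serialport'

    # Send one AT command and print whatever the modem echoes back.
    def send_at(modem, cmd, wait = 2)
      modem.write("#{cmd}\r")
      sleep wait
      begin
        puts modem.read_nonblock(4096)
      rescue IO::WaitReadable
        # no response yet
      end
    end

    SerialPort.open('/dev/ttyUSB0', 115200, 8, 1, SerialPort::NONE) do |modem|
      send_at(modem, 'AT+CFTPSERV="ftp.example.com"')  # FTP server address
      send_at(modem, 'AT+CFTPPORT=21')
      send_at(modem, 'AT+CFTPUN="user"')
      send_at(modem, 'AT+CFTPPW="password"')
      # Pull the firmware image from the FTP server into the module's EFS
      send_at(modem, 'AT+CFTPGETFILE="/firmware.bin",0', 30)
    end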
I've put the database and files on separate servers (because I've set up three servers for our web application and use a load balancer across them), and I use SFTP as the file storage driver.
Because I've used the SFTP driver for Laravel file storage, the number of SSH connections to the destination server keeps growing, the SSH port gets blocked, and the files can no longer be loaded from the storage server.
What should I do? Is there any other solution for loading files from another server?
I recommend using cloud storage for this use case, since it lets you configure the CORS settings. You could, for instance, use AWS S3, or DigitalOcean Spaces, which has an API similar to S3's.
You basically post your files to that cloud storage and save the URL to your database.
Of course, you need to set up the CORS settings, to basically say that server X can access specific files on the cloud storage.
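To make that concrete, a minimal sketch (in Ruby with the aws-sdk-s3 gem, purely for illustration; the bucket, region, and origin are placeholders, and Laravel gets the same effect through its built-in s3 filesystem driver):

    require 'aws-sdk-s3'

    # Upload a file and keep its URL for the database
    s3  = Aws::S3::Resource.new(region: 'us-east-1')
    obj = s3.bucket('my-app-uploads').object('avatars/user-42.png')
    obj.upload_file('/tmp/user-42.png')
    url = obj.public_url   # store this in your DB

    # Tell the bucket which origins may fetch the files cross-origin
    Aws::S3::Client.new(region: 'us-east-1').put_bucket_cors(
      bucket: 'my-app-uploads',
      cors_configuration: {
        cors_rules: [{
          allowed_methods: %w[GET PUT],
          allowed_origins: ['https://www.example.com'],
          allowed_headers: ['*'],
          max_age_seconds: 3600
        }]
      }
    )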
I'm trying to set up an FTPS server where I can upload files securely to a specific folder. I've set up the server, and it uploads files just fine over regular, unsecured FTP. However, after I set up an SSL certificate and try to upload a file over FTP-SSL, the client logs into the server just fine, but trying to upload a file causes my FTP client to hang without sending any data until the connection times out. I've tried this on a couple of different systems, both inside and outside my local network, with the same result.
I have implemented a WebDAV directory in PHP using SabreDAV for my website (an application server web interface).
For this website I am now writing a TCP socket service in C#, which runs on another server (actually it is in the same datacenter, but for the sake of argument, assume it is on the other hemisphere).
The socket is actually a service that can start and stop applications (game servers in this case). I have also implemented an FTP service in this socket (for data transfer).
My Goal:
I want to connect my WebDAV directory to the FTP server of my socket service, which means file listing, download, and upload. The use case is that a user only connects to a single service. Imagine my socket service is running on more than one server.
If I implemented this with my current know-how, I would do it this way:
User requests the WebDAV directory
The server performs a file listing on the FTP server
The file listing is added dynamically to the WebDAV directory
Now the user opens the directory and wants to download a file (a streamed sketch of this step follows below):
The WebDAV server requests the file from the FTP server
The WebDAV server provides the downloaded file
The WebDAV server deletes the provided file
In the other direction, the WebDAV server will accept a file and then upload it to the FTP server.
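For illustration, here is what that download step looks like if the WebDAV server streams the file through instead of staging and deleting it (a Ruby sketch, though the actual stack here is PHP/C#; the host, credentials, and response object are placeholders):

    require 'net/ftp'

    # Stream a file from the backend FTP server straight into the
    # client's HTTP response -- no temp file, and the client's progress
    # bar advances as chunks arrive.
    def stream_ftp_file(response, path)
      Net::FTP.open('ftp.internal.example', 'user', 'secret') do |ftp|
        response['Content-Length'] = ftp.size(path).to_s
        ftp.retrbinary("RETR #{path}", 8192) do |chunk|
          response.write(chunk)
        end
      end
    end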
If the servers are not in the same datacenter, this costs traffic. In any case, I think it takes some time if the data is binary instead of text-based configs. Also, the client-side progress bar will not reflect that the download to the WebDAV server / upload to the FTP server is in progress (the user may think nothing is happening).
I hope I have successfully communicated where my problem is.
So how can I implement this without delegating an upload/download from one server to another? Is this even possible?
Bonus: would a solution like WebDAV-to-WebDAV or FTP-to-FTP provide a better way of implementing it?
An easy way to achieve this is to use third-party software like WebDrive to map the FTP server's contents to a drive letter, then point the WebDAV server at this drive. Windows also provides an option to map a WebDAV/FTP URL to a drive letter so that the application can access it as if it were a local drive.
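For the built-in Windows route, a WebDAV URL can be mapped from a command prompt (the URL and drive letter are placeholders; the WebClient service must be running):

    net use Z: https://dav.example.com/files /persistent:yes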
As far as I know, CloudBees uses nginx servers for routing requests.
I am wondering if there is a way to drop my static files onto nginx so that it serves them and takes some load off my application server.
At runtime the file system is "ephemeral". You have more info about it here.
You should use an external service like Amazon S3. The Amazon S3 ClickStart explains how to use this service from CloudBees.
You can also use other services like Dropbox or Google Drive.
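For example, pushing a static asset to S3 and linking to it directly keeps it off the application server entirely. A small sketch in Ruby with the aws-sdk-s3 gem (the bucket, region, and paths are placeholders, and the bucket policy must allow public reads for the public URL to work):

    require 'aws-sdk-s3'

    s3  = Aws::S3::Resource.new(region: 'us-east-1')
    obj = s3.bucket('my-app-static').object('css/site.css')
    obj.upload_file('public/css/site.css',
                    content_type:  'text/css',
                    cache_control: 'public, max-age=86400')

    puts obj.public_url                             # link to this from your pages
    puts obj.presigned_url(:get, expires_in: 3600)  # or a time-limited link for private files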