File lock issues - Syncing through SCP client - file-locking

I'm trying to access files from an iPad; it is supposed to plot some live data. However, the file being generated by the Java program doesn't let the SCP client sync it to the server. The client can only access the file after the Java program is stopped, so that it lets go of the file it is writing. Is there a way around this, i.e. a real-time update of the server while the file is being generated by the program?

A possible solution would be to run a command-line version of scp as a system command from your Java program (sample code: http://www.java-samples.com/showtutorial.php?tutorialid=8) each time you have closed the file handle.
Command-line scp: http://www.jfitz.com/tips/ssh_for_windows_doc_version2.html#CommandLineSSHSCP

Related

How to place a file directly into HDFS, without using the local file system, by downloading it straight from a webpage?

I need some help. I am downloading a file from a webpage using Python code, placing it in the local file system, transferring it into HDFS using the put command, and then performing operations on it.
But there may be situations where the file is very large, and downloading it into the local file system first is not the right approach. So I want the file to be downloaded directly into HDFS without using the local file system at all.
Can anyone suggest which method would be the best way to proceed?
If there are any errors in my question, please correct me.
You can pipe it directly from a download to avoid writing it to disk, e.g.:
curl server.com/my/file | hdfs dfs -put - destination/file
The - parameter to -put tells it to read from stdin (see the documentation).
This will still route the download through your local machine, though, just not through your local file system. If you want to download the file without using your local machine at all, you can write a map-only MapReduce job whose tasks accept, for example, an input file containing a list of files to be downloaded, then fetch them and stream out the results. Note that this requires your cluster to have open access to the internet, which is generally not desirable.
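If the download already happens in Python, as in the question, the same stream-through idea can be expressed without touching the local file system by feeding the HTTP response into hdfs dfs -put -. This is only a minimal sketch: it assumes the requests library is installed, the hdfs CLI is on the PATH, and the URL and destination path are placeholders.
import subprocess

import requests  # assumed to be installed; any streaming HTTP client works

# Placeholder URL and HDFS destination - adjust to your setup.
url = "http://server.com/my/file"
hdfs_dest = "destination/file"

# "-put -" makes the hdfs CLI read the file contents from stdin,
# so nothing is written to the local file system.
put = subprocess.Popen(
    ["hdfs", "dfs", "-put", "-", hdfs_dest],
    stdin=subprocess.PIPE,
)

with requests.get(url, stream=True) as response:
    response.raise_for_status()
    for chunk in response.iter_content(chunk_size=1024 * 1024):
        put.stdin.write(chunk)

put.stdin.close()
if put.wait() != 0:
    raise RuntimeError("hdfs dfs -put failed")
Like the curl pipe above, this still routes the data through the local machine's memory, just never through its disk.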

FTP: How to know if a file on remote ftp server is complete [duplicate]

This question already has answers here:
How to detect that a file is being uploaded over FTP
A customer gave us FTP access to download PDF files.
Unfortunately, I don't know whether a file on the remote side is complete and ready to download.
This command line works:
ncftpget -u user -p pwd foo.example.com import created/*pdf
But I am afraid that the files are not complete. I don't want to download files which have not been completely created on the remote site.
Client and server run on Linux. File locking is not available.
Just for the record: we switched from FTP to HTTP. Up to now we used FTP, but now we use a simple tool to upload files via HTTP: tbzuploader
Check the size of the file every five seconds. If the size changes between consecutive checks, the file is still partial; if it does not, the file is complete.
I use ftputil to implement this work-around:
connect to the FTP server
list all files in the directory
call stat() on each file
wait N seconds
For each file: call stat() again. If the result is different, skip this file, since it was modified during the last N seconds.
If the stat() result is unchanged, download the file (see the sketch below).
This whole FTP fetching is old and obsolete technology. I hope that the customer will use a modern HTTP API next time :-)
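A rough sketch of that work-around with ftputil could look like the following. The host name, credentials, directory and the N-second wait are placeholders; it only illustrates the stat-twice idea described above.
import time

import ftputil  # pip install ftputil

HOST, USER, PASSWORD = "foo.example.com", "user", "pwd"  # placeholders
REMOTE_DIR = "created"                                   # placeholder
WAIT_SECONDS = 10                                        # "N seconds"

with ftputil.FTPHost(HOST, USER, PASSWORD) as host:
    names = host.listdir(REMOTE_DIR)

    # First round of stat() calls.
    first = {}
    for name in names:
        path = host.path.join(REMOTE_DIR, name)
        st = host.stat(path)
        first[name] = (st.st_size, st.st_mtime)

    time.sleep(WAIT_SECONDS)
    host.stat_cache.clear()  # ensure the second stat() is not served from cache

    # Second round: download only files whose stat() result is unchanged.
    for name in names:
        path = host.path.join(REMOTE_DIR, name)
        st = host.stat(path)
        if (st.st_size, st.st_mtime) == first[name]:
            host.download(path, name)  # remote path -> local file name
        # otherwise skip: the file was modified during the wait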

FTP - automatically rename/move file on FTP server when it's downloaded

I need to make a script for an FTP server that will detect when a file has been downloaded and then rename, move, or delete that file to prevent it from being re-downloaded. Is there a way to do this by placing a script on the FTP server that is always watching for a file being downloaded and then executes that process? I assume it could be done with a bash script, but I don't know enough about them to know whether one can run constantly and keep checking whether a file has been downloaded.
Thanks!

File created, but no data uploaded when using FTP in a shell

I am using the command
ftp -n -s:C:\FTP_cmd.txt ftp.madrecha.com
The FTP_cmd.txt file contains:
user
myName#domain.com
Pa$$Word
Put C:\AccessDocumentation.pptx
quit
The file is getting created on the server, but its size is 0 bytes; there is no data in the file. I tried using FileZilla to upload the same file with the same user; that was successful and the file was created with 352 KB.
Is there an issue in the command, or is this a server-side issue?
PS: I tried running it from cmd (on Windows) and also from PowerShell (on Windows), but got the same result.
Thanks in advance.
UPDATE: Attaching a screenshot of the command run.
I don't have the reputation to comment at the moment, so I'm writing my guesses as an answer.
I think the "put" command has to be lowercase.
Additionally, you should check the file permissions: you may have write access to the FTP server but no right to read the file you want to copy to the server.

Bash script to upload the first RAR part to FTP before the second part is complete (to save time and space)

I was making a bash script for my server which packs some directories with RAR and uploads them to another FTP server. Some folders are big, so I have to rar them in parts and wait for all parts to be rared before uploading them, which consumes a lot of time and space.
So I want to do it faster: upload every rared part as soon as it is complete and delete it automatically afterwards, without waiting for all the other parts. I know there is a risk of data getting corrupted or similar, but that's what I want to do, and I am confused about where to start the script.
Server OS: Ubuntu 9.10
thanks
Kevin
Start rar in the background.
Check whether rar is finished with a part by calling fuser foo.part2.rar.
If fuser does not return anything, you can transfer that part to the remote FTP server.
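The question asks for a bash script, but just to make the loop concrete, here is a rough sketch of the same idea in Python. It assumes rar and fuser are installed and an FTP server is reachable; the archive name, part naming, source directory, and credentials are placeholders.
import glob
import os
import subprocess
import time
from ftplib import FTP

ARCHIVE = "backup"          # placeholder base name -> backup.part1.rar, ...
SOURCE_DIR = "/data/stuff"  # placeholder directory to pack
FTP_HOST, FTP_USER, FTP_PASS = "ftp.example.com", "user", "secret"  # placeholders

# Start rar in the background, splitting the archive into 100 MB parts.
rar = subprocess.Popen(["rar", "a", "-v100m", f"{ARCHIVE}.rar", SOURCE_DIR])

uploaded = set()
with FTP(FTP_HOST, FTP_USER, FTP_PASS) as ftp:
    while True:
        for part in sorted(glob.glob(f"{ARCHIVE}.part*.rar")):
            if part in uploaded:
                continue
            # fuser exits non-zero (and prints nothing) when no process has
            # the file open, i.e. rar has finished writing this part.
            busy = subprocess.run(
                ["fuser", part],
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
            ).returncode == 0
            if not busy:
                with open(part, "rb") as fh:
                    ftp.storbinary(f"STOR {os.path.basename(part)}", fh)
                os.remove(part)  # free local space right away
                uploaded.add(part)
        # Stop once rar has exited and no un-uploaded parts remain on disk.
        if rar.poll() is not None and not (
            set(glob.glob(f"{ARCHIVE}.part*.rar")) - uploaded
        ):
            break
        time.sleep(5)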

Resources