I'm currently trying to upload some files via ZMODEM to a small system running embedded Linux with BusyBox. While most files take a long time over the 9600 baud connection, there is one file that always fails (cramfs_cmc-pu2_v2.45.img). At about 4 MB it is also the largest one. For the upload I use Le Putty, a PuTTY fork that supports ZMODEM. Unfortunately there is no other way to upload files, as the FTP server on that machine does not work properly.
The problem is that the upload always ends with this strange output (after some hours of no feedback at all):
# /usr/bin/rz
Sending: cramfs_cmc-pu2_v2.45.img23be50
Bytes Sent: 0/4132864 BPS:0 ETA 00:00
®B#id##íÁ##htCJÁ®B#killíÁ##htCJ®B#killall#íÁ##htCJÁ®B#ln##íÁ##htCJ®B
#logger##íÁ##<H#Jº!#login###íÁ##htCJÁ®B#ls##íÁ##htCJ®B#md5sum##íÁ##¿
##JCø##mgfestart###íÁ##htCJ®B#mkdir###íÁ##htCJ®B#mknod###íÁ##htCJkH>
F¾#
I guessed that it ran out of flash memory, but df just gives me
df: /proc/mounts: No such file or directory
Calculating free space is difficult in this case anyway, as the filesystem is JFFS2.
Maybe someone has an idea how to solve this problem with that ancient protocol. Thanks in advance.
Edit: Meanwhile I've split the file into many smaller ones and tried to upload them. The upload always fails after two files, which supports the suspicion that there is not enough free space.
A quite simple approach to check how much space is left, even if you have no df:
I just copied an existing file several times until the result was "No space left on device". So I'm pretty sure that the strange behaviour described above happened because of this.
I have one running server that handles C-MOVE, two running servers that handle C-STORE, and a remote PACS server (GEPACS).
When I issue a C-MOVE command from the remote PACS to the C-STORE handlers, one server (py-netdicom) builds and saves the file properly and the other (go-netdicom) does not.
So there were a couple of problems in go-netdicom.
I fixed the code so it can handle hexadecimals; that was originally not supported in go-netdicom.
This fixed almost every problem in my case, but it still cannot store pixel data properly.
For example, I received 9117252 bytes of original data from the remote PACS and saved the data as-is, but it actually needs to be 18000000 bytes (so I got an error). Even CT images come up short by about a factor of three (I get approximately 180000 bytes, but need 524288).
I think the problem might be caused by the encapsulation of the pixel data, but I'm not sure.
Is there any tip or help available?
Thank you.
EDIT 4: I've got a clue (link here).
Somehow the C-STORE command carries a kind of transfer syntax.
It tells the SCP what type of data (compressed or not) it will get from the SCU.
But I still don't have an idea which part of go-netdicom has to be changed.
I'll remove the "python" tag because this is not related to Python anymore.
I found the solution.
Somehow, GEPACS sends a certain transfer syntax for JPEG compression.
If go-netdicom doesn't have the TransferSyntaxUID, it picks GEPACS's first transfer syntax, and that was the one for JPEG compression.
I just put big endian and explicit VR (the GEPACS default) when the transfer syntax is empty.
The change is in contextmanager.go, line 101 (in AssociateRequest) and line 127.
Hope this result helps someone.
Thank you
The problem here is that go-netdicom uses the first PresentationContext sent in the A_ASSOCIATE_RQ (as you can see in the last image). So it accepts "2.16.840.1.113709.1.2.2", which is a private transfer syntax that is not in the DICOM standard, so nothing can handle the C-STORE in the end.
If you are reading this, maybe you don't use go-netdicom, but the problem could be the same if the error involves the transfer syntax "2.16.840.1.113709.1.2.2". The Centricity PACS documentation says: "It is expected that other vendors' applications will ignore all Presentation Context proposed with the GE Private Compress Express Transfer Syntax".
And that is what we are supposed to do. I see a list of PRs in go-netdicom, so I suppose it is not maintained, so I will post the change for go-netdicom here. I made these changes in contextmanager.go and it works like a charm.
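In rough terms, the change amounts to something like the sketch below (illustrative Go only; the function and constant names are placeholders, not the actual go-netdicom internals):

// Illustrative sketch, not the real go-netdicom code: when choosing a transfer
// syntax from the presentation contexts proposed in the A_ASSOCIATE_RQ, skip
// the GE private "Compress Express" UID instead of blindly taking the first
// entry, and fall back to a standard syntax if nothing usable remains.
package main

import "fmt"

const (
	gePrivateCompressExpress = "2.16.840.1.113709.1.2.2" // GE private transfer syntax
	explicitVRBigEndian      = "1.2.840.10008.1.2.2"     // the "big endian, explicit VR" fallback mentioned above
)

func pickTransferSyntax(proposed []string) string {
	for _, uid := range proposed {
		if uid == gePrivateCompressExpress {
			continue // ignore the private syntax, as the Centricity documentation expects
		}
		return uid
	}
	return explicitVRBigEndian // nothing usable proposed: use the default
}

func main() {
	fmt.Println(pickTransferSyntax([]string{"2.16.840.1.113709.1.2.2", "1.2.840.10008.1.2.1"}))
}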
I have taken Apple's SimpleFTPSample and edited it to my needs (I did not change the uploading part).
It reports that the upload went fine, and the file is indeed on the FTP server, but if I download the file and open it (mainly JPG images) I get an error message that the file is corrupted.
The only thing I have changed is that when the transfer is finished the connection is forced to close, instead of remaining alive waiting for further uploads.
I think the program is assigning the last chunk of data to the upload stream and, once that is done, assumes it has finished without waiting for the stream to actually upload that last chunk. Is that possible? Is there a way to see whether the network stream (output stream) still has data in its buffer? I seem to be able to do that only for the input stream but not for the output stream.
UPDATE: After comparing the uploaded file with the original in a hex editor, I found that the files are identical except that the uploaded one has the final part chopped off. The missing part is not always the same size; it varies between 0 and 256 KB.
UPDATE2:
NSLog(@"ntstrm: %zu", self.networkStream.streamStatus);
is always returning 2 (NSStreamStatusOpen), even while it is uploading; while uploading it should return 4 (NSStreamStatusWriting)! Then after closing it returns 0, but never 4...
UPDATE3:
The only solution I have found so far is to put in a timer and wait 15 seconds before closing the connection. But that is not something I want to do, because the program is used to upload MANY files, and if I have to wait 15 seconds for each file it is a huge pain.
Any help is appreciated.
It is not easy to propose a solution without seeing any fragment of code.
Going by your problem description, this could be related to a buffer not being flushed properly.
I would suggest trying to force an immediate flush on the output stream. Someone who also wanted to avoid socket delays did that with TCP_NODELAY, simply by setting an option at the socket level:
int yes = 1;
// Disable Nagle's algorithm so small writes are pushed to the wire immediately.
setsockopt(CFSocketGetNative(aSocket), IPPROTO_TCP, TCP_NODELAY, (void *)&yes, sizeof(yes));
Hope this helps.
I'm trying to download a small zip file (1159 bytes) and pipe it through funzip. This works great with larger files from that server. However, three small files give me an error:
Broken pipe, closing control connection.
I use the following code:
wget -O - --ftp-user=username --ftp-password=secret ftp://server/small-file.zip | funzip
Downloading the file directly also works fine; only the piping to funzip doesn't work. I suspect the file is too small.
Does anyone know how to fix this?
Edit: Size doesn't seem to matter (don't let the girls tell you otherwise :)); even files of 400 bytes are not giving errors.
OK, since nobody can answer it, I'll answer it myself.
I found there are two solutions. One is limiting the download rate for wget:
--limit-rate=1000
This works for files of around 1 KB, but now larger files sometimes seem to suffer from the same error. It also slows down the whole process.
Now I just pipe the download through a script that sleeps 1 second at the end. This seems to solve it.
To restrict the scope, let's assume we are in the Windows world only.
Also assume we don't want to play with permission policy.
Is it possible for us to create a file that cannot be copied?
Thank you in advance.
"Trying to make digital files uncopyable is like trying to make water not wet." ~ Bruce Schneier
No. You can't create a file that a SYSADMIN can't copy. You could encrypt it, though.
Well, how about creating a file that uses up more than 50% of the total space on that machine and that is not compressible?
For instance, let us assume that you want to save a boolean (true or false) in such a fashion.
Depending on its value, you could then write a bit stream of all ones or all zeroes and encrypt said stream using some kind of encryption algorithm, such as AES in CBC mode. This gives you the added advantage of redundancy: even in case of massive data corruption, you should be able to recover your boolean by checking whether ones or zeroes are prevalent in the decrypted stream.
In that case you cannot copy it around (completely) on the machine...
Of course, any type of external storage that can be added to the system would pose a problem in this scenario. But the file would already be encrypted, so don't worry about it too much...
Any file that can be read can have its contents written to another location (such as another file, i.e. copied).
The only thing you can do is limit who/what can read the file.
What is the motivation behind this? If it is a read-only file, you can have it as an embedded resource within your assembly.
Nice try, RIAA.
But seriously, no, you cannot. It is always possible to copy; you can just make it more difficult for people to make sense of the file, or try to hide it using something like encryption. Spotify does that.
If you really try hard though, you could make a rootkit for Windows and use it to prevent Windows from even knowing about the file, and also prevent copies. The file would still be there and copyable by other tools, or by Linux accessing the NTFS volume.
If a running process opens a file and holds an exclusive lock, then other processes cannot read the file until you close the handle or your process terminates. However, as an administrator you could forcibly close the handle and remove the lock.
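For illustration, a minimal sketch of holding such an exclusive handle on Windows, written in Go with the golang.org/x/sys/windows package (the path is just an example):

// Sketch: open a file with share mode 0 so that no other process can open it
// (and therefore copy it) while this handle stays open.
package main

import (
	"fmt"
	"time"

	"golang.org/x/sys/windows"
)

func main() {
	path, _ := windows.UTF16PtrFromString(`C:\temp\secret.dat`) // example path
	h, err := windows.CreateFile(
		path,
		windows.GENERIC_READ|windows.GENERIC_WRITE,
		0,   // share mode 0: deny read/write/delete to every other process
		nil, // default security attributes
		windows.OPEN_EXISTING,
		windows.FILE_ATTRIBUTE_NORMAL,
		0,
	)
	if err != nil {
		fmt.Println("CreateFile failed:", err)
		return
	}
	defer windows.CloseHandle(h)

	fmt.Println("holding the file exclusively; try copying it now")
	time.Sleep(time.Minute) // keep the handle (and the lock) open for a while
}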
Short answer: No.
You can, of course, use security settings to limit who can read the file. But if someone can read it, then they can copy it. Even if you found some operating system trick to disable "ordinary" copying, if someone can read the file, they can extract the contents, store it in memory, and then write it somewhere else.
You can encrypt the contents so it's only useful to your own program, that knows how to decrypt it.
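As a sketch of that encryption idea (Go, using the standard crypto/aes and crypto/cipher packages; the key handling here is deliberately simplified and just an example):

// Sketch: seal a file's contents with AES-GCM so that only a program holding
// the key can make sense of them. Real key management is omitted on purpose.
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"
)

func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // key must be 16, 24 or 32 bytes long
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so the decrypting program can find it again.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

func main() {
	key := make([]byte, 32) // example only; derive or store the key properly
	out, err := seal(key, []byte("only my own program can read this"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", out)
}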
That's about it.
When using Windows 7 to copy some files from a hard drive, certain files popped up a message saying they could not be copied in their entirety; certain data would be omitted from the copy. I suspect that had something to do with slack space at the end of the files, though I thought the message was curious. I would have expected the copy operation to just ignore the slack space.
If you are running old (OLD) versions of Windows, there are certain characters you can put in the filename that make it invalid, not listed in folders, etc. They were used a lot in the old pub FTP days of file sharing ;)
In the old DOS days, you used to be able to flag disk sectors as bad and still read from them. This meant the OS ignored the sector in question but your application would know where to look and be able to get the data. Not sure this would work these days.
Another old MS-DOS trick was to put a space character in the middle of the filename (yes, spaces were valid characters for filenames). Since there was no method on the command line to escape a space, the file couldn't be copied using the DOS commands.
This answer is outside Windows, so yeah.
Don't know if it's already been said, but what about a file that is an inseparable part of the firmware, so that it is always on AND running; perhaps it has firmware that generates a sequence that is required elsewhere. An incidental effect of its running is to prevent 80% or more of its code from being replicated. Let's say it's on an entirely different board, protected by surge protectors, heavy EM-proof shielding and anything else required to make it completely unerasable.
If it's possible to make a program that is ALWAYS on and running as long as the copying software is running, then yes.
I have another way, and this IS with Windows: I will come to your house and give you a disk, then proceed to destroy every single computer you put the disk into. This doesn't work on XP.
Well technically you could create and write to a write-only network share.
I am looking for a robust way to copy files over a Windows network share that is tolerant of intermittent connectivity. The application is often used on wireless, mobile workstations in large hospitals, and I'm assuming connectivity can be lost either momentarily or for several minutes at a time. The files involved are typically about 200KB - 500KB in size. The application is written in VB6 (ugh), but we frequently end up using Windows DLL calls.
Thanks!
I've used Robocopy for this with excellent results. By default, it will retry every 30 seconds until the file gets across.
I'm unclear as to what your actual problem is, so I'll throw out a few thoughts.
Do you want restartable copies (with such small file sizes, that doesn't seem like it'd be that big of a deal)? If so, look at CopyFileEx with COPY_FILE_RESTARTABLE.
Do you want verifiable copies? Sounds like you already have that by verifying hashes.
Do you want better performance? It's going to be tough, as it sounds like you can't run anything on the server. Otherwise, TransmitFile may help.
Do you just want a fire-and-forget operation? I suppose shelling out to Robocopy, or TeraCopy or something, would work - but it seems a bit hacky to me.
Do you want to know when the network comes back? IsNetworkAlive has your answer.
Based on what I know so far, I think the following pseudo-code would be my approach:
sourceFile = Compress("*.*");
destFile = "X:\files.zip";
int copyFlags = COPY_FILE_FAIL_IF_EXISTS | COPY_FILE_RESTARTABLE;
while (CopyFileEx(sourceFile, destFile, null, null, false, copyFlags) == 0) {
    do {
        // optionally, increment a failure counter to break out at some point
        Sleep(1000);
    } while (!IsNetworkAlive(NETWORK_ALIVE_LAN));
}
Compressing the files first saves you the tracking of which files you've successfully copied, and which you need to restart. It should also make the copy go faster (smaller total file size, and larger single file size), at the expense of some CPU power on both sides. A simple batch file can decompress it on the server side.
Try using BITS (Background Intelligent Transfer Service). It's the infrastructure that Windows Update uses, is accessible via the Win32 API, and is built specifically to address this.
It's usually used for application updates, but should work well in any file moving situation.
http://www.codeproject.com/KB/IP/bitsman.aspx
I agree with Robocopy as a solution... that's why the utility is called "Robust File Copy".
"I've used Robocopy for this with excellent results. By default, it will retry every 30 seconds until the file gets across."
And by default, a million retries. That should be plenty for your intermittent connection.
It also does restartable transfers, and you can even throttle transfers with a gap between packets (the /IPG switch), assuming you don't want to use all the bandwidth because other programs are using the same connection.
How about simply sending a hash after or before you send the file, and comparing that with the file you received? That should at least make sure you have a correct file.
If you want to go all out you could do the same process, but for small parts of the file. Then when you have all pieces, join them on the receiving end.
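A minimal sketch of the whole-file check (Go; the file name is just an example taken from the pseudo-code above):

// Sketch: compute a SHA-256 digest of a file so sender and receiver can
// compare hashes after the copy. The "small parts" variant would hash each
// chunk the same way before joining them on the receiving end.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func fileHash(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	sum, err := fileHash("files.zip") // example file name
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(sum)
}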
You could use Microsoft SyncToy (free).
http://www.microsoft.com/Downloads/details.aspx?familyid=C26EFA36-98E0-4EE9-A7C5-98D0592D8C52&displaylang=en
Hm, it seems rsync does it, and does not need the server/daemon/install I thought it did - just $ rsync src dst.
SMS, if it's available, works.