I have taken Apple's SimpleFTPSample and edited it to my needs (I did not change the uploading part).
It reports that the upload went fine, and the file is indeed on the FTP server, but if I download the file and open it (mainly JPG images) I get an error message saying the file is corrupted.
The only thing I have changed is that when the transfer is finished the connection is forced to close, instead of remaining alive waiting for other uploads.
I think the program is assigning the last chunk of data to the upload stream and, once that is done, it assumes it has finished, without waiting for the stream to actually upload that last chunk. Is that possible? Is there a way to see if the network stream (the output stream) has data left in its buffer? I seem to be able to do that only for the input stream, but not for the output stream.
UPDATE: After comparing the uploaded file with the original file in a hex editor, I found that the files are identical, except that the uploaded one has the final part chopped off. The chopped-off part is not always the same size; it varies between 0 and 256 KB...
UPDATE2:
NSLog(#"ntstrm: %zu", self.networkStream.streamStatus);
is allways returning 2 even while it is uploading. while uploading it should return 4! then after closing it returns 0. but never 4...
UPDATE3:
The only solution I have found so far is to add a timer and wait 15 seconds before closing the connection. But this is not something I want to do, because the program is used to upload MANY files, and if I have to wait 15 seconds for every file it is a huge pain.
Any help appreciated.
It is not easy to propose a solution without any fragment of code.
According to your problem description, this could be related to a buffer not being flushed properly.
I would suggest trying to force an immediate flush on the output stream. Someone who also wanted to avoid socket delays did that with TCP_NODELAY, simply setting an option at the socket level:
int yes = 1;
/* TCP_NODELAY disables Nagle's algorithm, so small writes go out immediately */
setsockopt(CFSocketGetNative(aSocket), IPPROTO_TCP, TCP_NODELAY, (void *)&yes, sizeof(yes));
Hope this helps.
I have one server running to handle C-MOVE, two servers running to handle C-STORE, and a remote PACS server (GEPACS).
When I issue a C-MOVE command from the remote PACS to the C-STORE handlers, one server (py-netdicom) builds and saves the file properly and the other (go-netdicom) does not.
So there were a couple of problems in go-netdicom.
I fixed the code so it can handle hexadecimals; that was originally not supported in go-netdicom.
This fixed almost every problem in my case, but it still cannot store pixel data properly.
For example, I got 9117252 bytes from the original signal from the remote PACS and saved the data as-is, but it actually needs to be 18000000 bytes (I got an error). Even CT images come up short by a factor of about three (I got approximately 180000 bytes, but need 524288).
I think the problem might be caused by the encapsulation of the pixel data, but I'm not sure.
Is there any tip or help for this?
Thank you.
EDIT 4: I've got a clue. Link here.
Somehow the C-STORE command has a transfer syntax associated with it.
This tells the SCP what type of data (compressed or not) it will get from the SCU.
But I still have no idea which part of go-netdicom has to be changed.
I'll delete the "python" tag because this is not related to Python anymore.
I found the solution.
Somehow, GEPACS sends a certain transfer syntax for JPEG compression.
If go-netdicom doesn't have the TransferSyntaxUID, it picks GEPACS's first transfer syntax, and that one was for JPEG compression.
I just put big endian and explicit VR (the GEPACS default) when the transfer syntax is empty.
This is placed in contextmanager.go, at line 101 in AssociateRequest, and at line 127.
Hope this result helps someone.
Thank you
The problem here is that go-netdicom uses the first PresentationContext sent in the A_ASSOCIATE_RQ (as you can see in the last image). So it accepts "2.16.840.1.113709.1.2.2", which is a private transfer syntax and is not in the DICOM standard, so nobody can handle the C-STORE in the end.
If you are reading this, maybe you do not use go-netdicom, but the problem could be the same if the error involves the transfer syntax "2.16.840.1.113709.1.2.2". The Centricity PACS documentation says: "It is expected that other vendors' applications will ignore all Presentation Context proposed with the GE Private Compress Express Transfer Syntax"
And that is what we are supposed to do. I see a list of open PRs in go-netdicom, so I suppose it is not maintained, so I will post the change for go-netdicom here. I made these changes in contextmanager.go and it works like a charm:
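In outline (the names below are simplified placeholders for illustration, not the exact go-netdicom types), the change is to stop blindly taking the first proposed transfer syntax and to fall back to a standard one when nothing proposed is recognised:

package main

import "fmt"

// Sketch only: placeholder names, not the literal contextmanager.go patch.
// pickTransferSyntax returns the first transfer syntax proposed by the SCU
// that the SCP actually supports; if none is recognised (e.g. only GE's
// private "2.16.840.1.113709.1.2.2" was proposed), it falls back to a
// standard syntax instead of accepting the private one.
func pickTransferSyntax(proposed []string, supported map[string]bool) string {
	for _, uid := range proposed {
		if supported[uid] {
			return uid
		}
	}
	// Fallback used here: Explicit VR Big Endian (the GEPACS default
	// mentioned above); Explicit VR Little Endian "1.2.840.10008.1.2.1"
	// would work just as well.
	return "1.2.840.10008.1.2.2"
}

func main() {
	supported := map[string]bool{"1.2.840.10008.1.2": true, "1.2.840.10008.1.2.1": true}
	proposed := []string{"2.16.840.1.113709.1.2.2"} // only GE's private syntax offered
	fmt.Println(pickTransferSyntax(proposed, supported))
}

In go-netdicom itself the equivalent check belongs where the A-ASSOCIATE-RQ presentation contexts are accepted, around the contextmanager.go lines mentioned above.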
A tool I'm writing is responsible for downloading thousands of image files over a matter of many hours. Originally, using TIdHTTP, I would Get the file(s) into a TMemoryStream, and then save that to a file, so long as there were no exceptions. In order to improve speed, I changed the TMemoryStream to a TFileStream.
However, now if the resource is not found, or any sort of exception occurs that results in no actual file, it still saves an empty file.
Completely understandable, since I simply create a file stream just prior to the download...
FileStream:= TFileStream.Create(FileName, fmCreate);
try
Web.Get(AURL, FileStream);
finally
FileStream.Free;
end;
I know I could simply delete the file if there was an exception. But it seems far too sloppy. I'm sure there's a more appropriate method of aborting such a situation.
How can I make this not save a file if there was an exception, while not affecting performance (if at all possible)?
How can I make this not save a file if there was an exception, while not affecting performance (if at all possible)?
This isn't possible in general. Errors and failures can happen at any step of the way, including part way through the download. Once this point is understood, you must accept that the file can be partially downloaded and then abandoned. At which point, where do you store it?
The obvious choices are memory and file. You don't want to store to memory, which leaves file.
This takes you back to your current solution.
I know I could simply delete the file if there was an exception.
This is the correct approach. There are a few variants on this. For instance you might download to a temporary file that is created with flags to arrange its deletion when closed. Only if the download completes do you then copy to the true destination. This is the approach that a browser takes. But the basic idea is to download to file and deal with any failure by tidying up.
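As a rough sketch of that flow (written in Go only to keep it self-contained; the same shape applies to TIdHTTP writing into a TFileStream and renaming at the end):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"path/filepath"
)

// downloadAtomic fetches url into destPath, but the file only appears under
// its final name if the whole download succeeded; a partial download goes to
// a temporary file that is removed on any failure.
func downloadAtomic(url, destPath string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}

	tmp, err := os.CreateTemp(filepath.Dir(destPath), "download-*.part")
	if err != nil {
		return err
	}
	defer func() {
		tmp.Close()
		os.Remove(tmp.Name()) // harmless no-op once the rename has happened
	}()

	if _, err := io.Copy(tmp, resp.Body); err != nil {
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	// Only a fully written file is moved to its real destination.
	return os.Rename(tmp.Name(), destPath)
}

func main() {
	// Hypothetical URL and filename, for illustration only.
	if err := downloadAtomic("http://example.com/image0001.jpg", "image0001.jpg"); err != nil {
		fmt.Println("download failed, nothing was saved:", err)
	}
}

The key point is the same as above: the destination name is only created after the transfer has fully succeeded, and anything partial is tidied up.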
Instead of downloading the entire image in one go, you could consider using HTTP range requests, if the server supports them. Then you could split the file into smaller parts, requesting the next part after the first finishes (or even requesting multiple parts at the same time to increase performance). If there is an exception you can then abort the future requests, so they never start in the first place.
YouTube and a number of streaming media sites started doing this a while ago. It used to be that if you started playing a video and then paused it, it would eventually cache the entire video. Now it only caches a little ahead of the current position. This saves a ton of bandwidth because of the abandon rate for videos.
You could write the partial file to disk or keep it in memory.
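A minimal sketch of such a ranged request (in Go purely for illustration; the Range header works the same from any HTTP client once the server advertises Accept-Ranges):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchRange asks the server for bytes [start, end] of url and writes them to w.
// A server that supports range requests answers with 206 Partial Content.
func fetchRange(url string, start, end int64, w io.Writer) (int64, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return 0, err
	}
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", start, end))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusPartialContent {
		return 0, fmt.Errorf("range request not honoured: %s", resp.Status)
	}
	return io.Copy(w, resp.Body)
}

func main() {
	// Hypothetical URL and destination, for illustration only.
	f, err := os.Create("image0001.jpg")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	const chunk = int64(256 * 1024)
	for off := int64(0); ; off += chunk {
		n, err := fetchRange("http://example.com/image0001.jpg", off, off+chunk-1, f)
		if err != nil {
			// Either a real failure, or (if the size is an exact multiple of
			// chunk) the 416 the server sends once we ask past the end.
			break
		}
		if n < chunk {
			break // a short chunk means the end of the file was reached
		}
	}
}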
I am trying to build a system (NodeJS + Express 4) that reads a user-uploaded text file, processes it, and feeds the result back to the user. I am using an AJAX upload, with multer as the parser for the multipart data. The whole workflow is supposed to go like this:
User chooses a local file, and clicks the upload button.
Server receives the file and reads it.
Server does some processing with the data.
Server sends the results back.
Every part of the chain works except the server read step: sometimes the file is not read fully, even though the server signals that the file upload has completed (I have tried multiple libraries, like multer, busboy and formidable, that trigger the upload-complete event). I have done various experiments, and here is what I found (with a 1000-line file):
fs.readFile sometimes ends prematurely. The resulting file can be anywhere between 100 and 1000 lines.
The missing part is almost always the last small piece; it feels like the pipe was not fully flushed yet. I have tried file sizes between 1,000 and 200,000 lines, and it is always missing the last few hundred lines.
Using streaming (createReadStream, or byline, line by line) almost solves the issue, but sometimes the result is 'undefined' or missing the last few lines, although much less frequently.
Triggering the read twice: the second read is almost guaranteed to return the full 1000 lines.
Is there any way to force NodeJS to 'flush' the uploaded file? Somehow I feel the upload-complete event is triggered (regardless of library; I guess they all depend on the file system) before the last piece of the file has been flushed to disk. Or maybe there is some other issue: reading a static file always gives the correct result. I could use plain HTTP POST forms, but I'd like to use AJAX to improve the user experience.
Any thoughts?
I'm currently trying to upload some files via ZMODEM to a small system with embedded Linux and BusyBox. While most files take a long time over the 9600 baud connection, there is one file that always fails (cramfs_cmc-pu2_v2.45.img). At about 4 MB it is also the largest one. For the upload I use Le Putty, a PuTTY fork that supports ZMODEM. Unfortunately there is no other way to upload files, as the FTP server on that machine does not work properly.
The problem is that the upload always ends up with this strange stuff (after some hours of no feedback at all):
# /usr/bin/rz
Sending: cramfs_cmc-pu2_v2.45.img23be50
Bytes Sent: 0/4132864 BPS:0 ETA 00:00
®B#id##íÁ##htCJÁ®B#killíÁ##htCJ®B#killall#íÁ##htCJÁ®B#ln##íÁ##htCJ®B
#logger##íÁ##<H#Jº!#login###íÁ##htCJÁ®B#ls##íÁ##htCJ®B#md5sum##íÁ##¿
##JCø##mgfestart###íÁ##htCJ®B#mkdir###íÁ##htCJ®B#mknod###íÁ##htCJkH>
F¾#
I guessed that it ran out of flash memory, but df gives me just:
df: /proc/mounts: No such file or directory
Calculating free space is difficult in that case anyway, as the filesystem is JFFS2.
Maybe someone has an idea how to solve this problem with that ancient protocol. Thanks in advance.
Edit: Meanwhile I have split the file into many smaller ones and tried to upload them. It always fails after two files. This supports the suspicion that there is not enough free space.
A quite simple approach to check how much space is left, even if you have no df:
I just copied an existing file several times and the result was: "No space left on the device". So I'm pretty sure that the strange behaviour described above happened because of this.
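For reference, the same fill-until-it-fails idea as a tiny program (Go here only as pseudo-code for the idea; on the BusyBox box itself repeatedly copying a file, as above, does the same job):

package main

import (
	"fmt"
	"os"
)

// Keep appending fixed-size blocks to a scratch file until the write fails
// with "no space left on device"; the amount written is a rough estimate of
// the free space.
func main() {
	const blockSize = 64 * 1024
	block := make([]byte, blockSize)

	f, err := os.Create("/tmp/fill.tmp") // hypothetical scratch path
	if err != nil {
		panic(err)
	}
	defer os.Remove("/tmp/fill.tmp")
	defer f.Close()

	var written int64
	for {
		n, err := f.Write(block)
		written += int64(n)
		if err != nil {
			fmt.Printf("stopped after ~%d KB: %v\n", written/1024, err)
			break
		}
	}
}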
I'm trying to download a small zip file (1159 bytes) and pipe it through funzip. This works great with larger files from that server. However, three small files give me an error:
Broken pipe, closing control connection.
I use the following code:
wget -O - --ftp-user=username --ftp-password=secret ftp://server/small-file.zip | funzip
Downloading the file directly also works fine; only the piping to funzip doesn't work. I suspect the file is too small.
Anyone knows how to fix this?
Edit: Size doesn't seem to matter (don't let the girls tell you otherwise :)); even files of 400 bytes are not giving errors.
OK, if nobody can answer it, I'll answer it myself.
I found there are two solutions. One is limiting the download rate for wget:
--limit-rate=1000
This works for the files of around 1 KB, but now larger files sometimes seem to suffer from the same error. It also slows down the whole process.
Now I just pipe the download through a script that sleeps 1 second at the end. This seems to solve it.