How can I properly C-STORE DICOM pixel data received via C-MOVE? - go

I have one server running to handle C-MOVE, two servers running to handle C-STORE, and a remote PACS server (GE PACS).
When I issue a C-MOVE command from the remote PACS to the C-STORE handlers, one server (py-netdicom) builds and saves the file properly, and the other (go-netdicom) does not.
So there were a couple of problems in go-netdicom.
I patched the code so it can handle hexadecimals; go-netdicom did not support them originally.
This fixed almost every problem in my case, but it still cannot store pixel data properly.
For example, I received 9117252 bytes of pixel data from the remote PACS and saved them as-is, but the file actually needs to be 18000000 bytes (so I got an error). Even CT images come out short by a factor of about three (I got approximately 180000 bytes, but 524288 are needed).
I think the problem might be caused by the encapsulation of the pixel data, but I'm not sure.
Are there any tips, or can anyone help?
Thank you.
EDIT 4: I've got a clue. link here
It turns out the C-STORE operation carries a transfer syntax of its own.
It tells the SCP what type of data (compressed or not) the SCU will send.
But I still have no idea which part of go-netdicom has to be changed.
I'll remove the "python" tag because this is not related to Python anymore.

I found the solution.
It turns out GE PACS proposes a certain transfer syntax for JPEG compression.
If go-netdicom doesn't recognize the TransferSyntaxUID, it picks GE PACS's first proposed transfer syntax, which is the one for JPEG compression.
I simply fall back to Explicit VR Big Endian (the GE PACS default) when the transfer syntax is empty.
The change goes in contextmanager.go, at line 101 in the AssociateRequest handling and at line 127.
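In essence, the fallback looks like the minimal Go sketch below. The constant and function names here are mine, not go-netdicom's actual internals, so treat it as an illustration of the idea rather than a drop-in patch:

package main

import "fmt"

// Explicit VR Big Endian, the GE PACS default transfer syntax.
const explicitVRBigEndianUID = "1.2.840.10008.1.2.2"

// chooseTransferSyntax falls back to the GE PACS default when the
// negotiated transfer syntax is empty, instead of silently inheriting
// whatever the peer proposed first (which for GE PACS is its private
// JPEG-compression syntax).
func chooseTransferSyntax(negotiated string) string {
    if negotiated == "" {
        return explicitVRBigEndianUID
    }
    return negotiated
}

func main() {
    fmt.Println(chooseTransferSyntax(""))                    // 1.2.840.10008.1.2.2
    fmt.Println(chooseTransferSyntax("1.2.840.10008.1.2.1")) // unchanged
}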
Hope this result helps someone.
Thank you

The problem here is that go-netdicom uses the first PresentationContext sent in the A_ASSOCIATE_RQ (as you can see in the last image). So it accepts "2.16.840.1.113709.1.2.2", which is a private transfer syntax that is not in the DICOM standard, so nothing downstream can handle the C-STORE.
If you are reading this, maybe you don't use go-netdicom, but the problem could be the same if the error involves the transfer syntax "2.16.840.1.113709.1.2.2". The Centricity PACS documentation says: "It is expected that other vendors' applications will ignore all Presentation Context proposed with the GE Private Compress Express Transfer Syntax".
And that is what we are supposed to do. I see a list of open PRs on go-netdicom, so I suppose it is no longer maintained; I will post the change for go-netdicom here. I made these changes in contextmanager.go and it works like a charm:
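The code block from the original post did not survive in this copy, so what follows is a Go sketch of the change as described, with illustrative names rather than go-netdicom's actual internals: filter GE's private Compress Express transfer syntax out of each proposed presentation context, so a standard syntax gets negotiated instead.

package main

import "fmt"

// GE Private Compress Express Transfer Syntax, which other vendors'
// applications are expected to ignore per the Centricity documentation.
const gePrivateCompressExpressUID = "2.16.840.1.113709.1.2.2"

// filterTransferSyntaxes drops the GE private transfer syntax from a
// proposed presentation context's list, leaving only standard syntaxes
// for the context manager to pick from.
func filterTransferSyntaxes(proposed []string) []string {
    var kept []string
    for _, uid := range proposed {
        if uid != gePrivateCompressExpressUID {
            kept = append(kept, uid)
        }
    }
    return kept
}

func main() {
    proposed := []string{"2.16.840.1.113709.1.2.2", "1.2.840.10008.1.2.1"}
    fmt.Println(filterTransferSyntaxes(proposed)) // [1.2.840.10008.1.2.1]
}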

Related

zmodem upload ends up with strange error

I'm currently trying to upload some files via zmodem to a small system running embedded Linux with BusyBox. While most files take a long time over the 9600 baud connection, there is one file that always fails (cramfs_cmc-pu2_v2.45.img). At about 4 MB it is also the largest one. For the upload I use Le Putty, a PuTTY fork that supports zmodem. Unfortunately there is no other way to upload files, as the FTP server on that machine does not work properly.
The problem is that the upload always ends with this strange output (after some hours with no feedback at all):
# /usr/bin/rz
Sending: cramfs_cmc-pu2_v2.45.img23be50
Bytes Sent: 0/4132864 BPS:0 ETA 00:00
®B#id##íÁ##htCJÁ®B#killíÁ##htCJ®B#killall#íÁ##htCJÁ®B#ln##íÁ##htCJ®B
#logger##íÁ##<H#Jº!#login###íÁ##htCJÁ®B#ls##íÁ##htCJ®B#md5sum##íÁ##¿
##JCø##mgfestart###íÁ##htCJ®B#mkdir###íÁ##htCJ®B#mknod###íÁ##htCJkH>
F¾#
I guessed that it runs out of flash memory, but df gives me just:
df: /proc/mounts: No such file or directory
Calculating free space is difficult in this case anyway, as the filesystem is jffs2.
Maybe someone has an idea how to solve this problem with that ancient protocol. Thanks in advance.
Edit: Meanwhile I've split the file into many smaller ones and tried to upload them. It always fails after two files. This supports the suspicion that there is not enough free space.
A quite simple approach to check how much space is left, even if you have no "df":
I just copied an existing file several times until the result was "No space left on device". So I'm pretty sure the strange behaviour described above happened because of this.
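If you can run anything on the box, that probe can be automated; here is a Go sketch of the same idea (the path is illustrative, and on a tiny embedded system a shell loop doing the copy trick above is obviously more practical):

package main

import (
    "fmt"
    "os"
)

// Crude free-space probe for systems without df: keep writing until the
// filesystem refuses, then report how much fit.
func main() {
    const probe = "/tmp/fill.probe"
    f, err := os.Create(probe)
    if err != nil {
        panic(err)
    }
    defer os.Remove(probe)
    defer f.Close()

    buf := make([]byte, 64*1024)
    var total int64
    for {
        n, err := f.Write(buf)
        total += int64(n)
        if err != nil {
            fmt.Printf("stopped after %d bytes: %v\n", total, err)
            return // typically "no space left on device"
        }
    }
}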

Broken pipe, closing control connection. while piping small file through funzip using wget

I'm trying to download a small zip file (1159 bytes) and pipe it through funzip. This works great with larger files from that server. However, three small files give me an error:
Broken pipe, closing control connection.
I use the following code:
wget -O - --ftp-user=username --ftp-password=secret ftp://server/small-file.zip | funzip
Downloading the file directly also works fine; only the piping through funzip doesn't work. I suspect the file is too small.
Anyone knows how to fix this?
Edit: Size doesn't seem to matter (don't let the girls tell you otherwise :)); even files of 400 bytes do not give errors.
OK, since nobody can answer it, I'll answer it myself.
I found there are two solutions. One is limiting the download rate for wget:
--limit-rate=1000
This works for files of around 1 kB, but now larger files sometimes suffer from the same error. It also slows down the whole process.
Now I just pipe the download through a script that sleeps one second at the end. This seems to solve it.
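The post doesn't show the wrapper script itself, so here is a guess at its shape as a small Go program: it copies stdin to stdout and then lingers for a second before exiting, keeping the pipe open long enough for wget to finish the FTP control-connection teardown.

package main

import (
    "io"
    "os"
    "time"
)

// Pass-through that delays its own exit by one second, so the writer
// upstream doesn't see a broken pipe while closing its control connection.
func main() {
    if _, err := io.Copy(os.Stdout, os.Stdin); err != nil {
        panic(err)
    }
    time.Sleep(time.Second)
}

Used as: wget -O - --ftp-user=username --ftp-password=secret ftp://server/small-file.zip | ./sleepypipe | funzip (the program name is made up).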

Is it possible to create a file that cannot be copied?

To restrict the scope, let's assume we are in the Windows world only.
Also assume we don't want to play with permission policies.
Is it possible for us to create a file that cannot be copied?
Thank you in advance.
"Trying to make digital files uncopyable is like trying to make water not wet." ~ Bruce Schneier
No. You can't create a file that a SYSADMIN can't copy. You could encrypt it, though.
Well, how about creating a file that uses up more than 50% of the total space on that machine and that is not compressible?
For instance, let us assume that you want to save a boolean (true or false) in such a fashion.
Depending on its value, you could then write a bit stream of ones or zeroes and encrypt said stream using some kind of encryption algorithm, such as AES in CBC mode. This gives you the added advantage of error tolerance: even in the case of massive data corruption, you should be able to recover your boolean by checking whether ones or zeroes are prevalent in the decrypted stream.
In that case you cannot copy it around (completely) on the machine...
Of course, any type of external memory that can be added to the system would pose a problem in this scenario. But the file would be already encrypted, so don't worry about it too much...
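As a toy Go sketch of that scheme (key, IV handling, and sizes are illustrative only): encode the boolean as a long run of 0x00 or 0xFF bytes, encrypt with AES-CBC, and recover it later by majority vote over the decrypted stream, so scattered corruption cannot flip the answer.

package main

import (
    "bytes"
    "crypto/aes"
    "crypto/cipher"
    "crypto/rand"
    "fmt"
)

// encodeBool writes n bytes of 0x00 (false) or 0xFF (true) and encrypts
// them with AES-CBC. n must be a multiple of aes.BlockSize.
func encodeBool(v bool, n int, key, iv []byte) []byte {
    fill := byte(0x00)
    if v {
        fill = 0xFF
    }
    block, err := aes.NewCipher(key)
    if err != nil {
        panic(err)
    }
    out := make([]byte, n)
    cipher.NewCBCEncrypter(block, iv).CryptBlocks(out, bytes.Repeat([]byte{fill}, n))
    return out
}

// decodeBool decrypts and takes a majority vote: intact blocks decrypt to
// exactly 0x00 or 0xFF, while corrupted blocks decrypt to noise that splits
// roughly evenly and so cannot outvote the intact ones.
func decodeBool(ct, key, iv []byte) bool {
    block, err := aes.NewCipher(key)
    if err != nil {
        panic(err)
    }
    plain := make([]byte, len(ct))
    cipher.NewCBCDecrypter(block, iv).CryptBlocks(plain, ct)
    ones := 0
    for _, b := range plain {
        if b > 0x7F {
            ones++
        }
    }
    return ones > len(plain)/2
}

func main() {
    key := make([]byte, 32)
    iv := make([]byte, aes.BlockSize)
    rand.Read(key)
    rand.Read(iv)

    ct := encodeBool(true, 4096, key, iv)
    ct[100] ^= 0xAA // simulate corruption of one ciphertext byte
    fmt.Println(decodeBool(ct, key, iv)) // still true
}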
Any file that can be read can have its contents written to another location (such as another file, i.e. copied).
The only thing you can do is limit who/what can read the file.
What is the motivation behind this? If it is a read-only file, you can embed it as a resource within your assembly.
Nice try, RIAA.
But seriously, no, you cannot. It is always possible to copy; you can just make it more difficult for people to make sense of the file, or try to hide it using something like encryption. Spotify does this.
If you really try hard, though, you could make a rootkit for Windows and use it to prevent Windows from even knowing about the file, and also to prevent copies. The file would still be there and copyable by other tools, or by Linux accessing the NTFS volume.
If a running process opens a file and holds an exclusive lock, then other processes cannot read the file until it closes the handle or the process terminates. However, as admin you could forcibly remove the lock handle.
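For illustration, a minimal Go sketch of that lock on Windows (the path is hypothetical; it uses the golang.org/x/sys/windows package): opening the file with a share mode of zero means no other process can open it at all while the handle is held.

package main

import (
    "fmt"
    "time"

    "golang.org/x/sys/windows"
)

func main() {
    path, err := windows.UTF16PtrFromString(`C:\secret\data.bin`)
    if err != nil {
        panic(err)
    }
    // Share mode 0: no sharing. Other processes cannot open the file for
    // reading, writing, or deletion while this handle stays open.
    h, err := windows.CreateFile(
        path,
        windows.GENERIC_READ,
        0,   // exclusive: deny all sharing
        nil, // default security attributes
        windows.OPEN_EXISTING,
        windows.FILE_ATTRIBUTE_NORMAL,
        0,
    )
    if err != nil {
        panic(err)
    }
    defer windows.CloseHandle(h)

    fmt.Println("holding the file exclusively; try copying it now")
    time.Sleep(time.Minute) // released when the handle closes or the process exits
}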
Short answer: No.
You can, of course, use security settings to limit who can read the file. But if someone can read it, then they can copy it. Even if you found some operating system trick to disable "ordinary" copying, if someone can read the file, they can extract the contents, store it in memory, and then write it somewhere else.
You can encrypt the contents so it's only useful to your own program, that knows how to decrypt it.
That's about it.
When using Windows 7 to copy some files from a hard drive, certain files popped up a message saying they could not be copied in their entirety; certain data would be omitted from the copy. I suspect that had something to do with slack space at the end of the files, though I thought the message was curious. I would have expected the copy operation to just ignore the slack space.
If you are running old (OLD) versions of Windows, there are certain characters you can put in a filename that make it invalid, not listed in folders, etc. They were used a lot in the old pub-FTP days of file sharing ;)
In the old DOS days, you used to be able to flag disk sectors as bad and still read from them. This meant the OS ignored the sector in question but your application would know where to look and be able to get the data. Not sure this would work these days.
Another old MS-DOS trick was to put a space character in the middle of the filename (yes, spaces were valid characters for filenames). Since there was no method on the command line to escape a space, the file couldn't be copied using the DOS commands.
This answer is outside Windows, so yeah.
Don't know if it's already been said, but what about a file that is an inseparable part of the firmware, so that it is always on and running? Perhaps it has firmware that generates a sequence that is required for the other... An incidental effect of its running is to prevent 80% or more of its code from being replicated. Let's say it sits on an entirely different board, protected by surge protectors, heavy EM-proof shielding, and anything else required to make it completely unerasable.
If it's possible to make a program that is always on and running as long as the copying software is running, then yes.
I have another way, and this IS with Windows: I will come to your house and give you a disk, and I will then proceed to destroy every single computer you put the disk into. This doesn't work on XP.
Well technically you could create and write to a write-only network share.

Verify whether an FTP transfer is complete or not?

I have an application which polls a folder continuously. Once any file is FTP'd into the folder, the application has to move it to some other folder for processing.
Here, we don't have any way to verify whether the FTP transfer is complete or not.
One command, "lsof", was suggested in the technical forums. It has a file-descriptor column which gives the file's status.
Since this is a FreeBSD command and is not present in old versions of Linux, I want to clarify its usage.
Can you tell me about your experience with file verification, and is there any other alternative solution available?
Also, is there any risk in using this utility?
I appreciate your help in advance.
Thanks,
Mathew Liju
We've done this before in a number of different ways.
Method one:
If you can control the process sending the files, have it send the file itself followed by a sentinel file. For example, send the real file "contracts.doc" followed by a one-byte "contracts.doc.sentinel".
Then have your listener process watch out for the sentinel files. When one of them is created, you should process the equivalent data file, then delete both.
If a data file is more than a day old and doesn't have a corresponding sentinel file, get rid of it - it was a failed transmission.
Method two:
Keep an eye on the files themselves (specifically the last modification date/time). Only process files whose modification time is more than N minutes in the past. That increases the latency of processing the files but you can usually be certain that, if a file hasn't been written to in five minutes (for example), it's done.
Conclusion:
Both those methods have been used by us successfully in the past. I prefer the first but we had to use the second one once when we were not allowed to change the process sending the files.
The advantage of the first one is that you know the file is ready when the sentinel file appears. With both lsof (I'm assuming you're treating files that aren't open by any process as ready for processing) and the timestamp approach, it's possible that the FTP transfer crashed in the middle and you may end up processing half a file.
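A sketch of the sentinel scheme as a poller in Go (directory, suffix, and interval are assumptions, not from the original posts):

package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
    "time"
)

// Poll the drop directory; a data file is ready only once its companion
// sentinel file exists.
func main() {
    const dir = "/incoming"
    for {
        entries, err := os.ReadDir(dir)
        if err != nil {
            panic(err)
        }
        for _, e := range entries {
            name := e.Name()
            if strings.HasSuffix(name, ".sentinel") {
                continue
            }
            sentinel := filepath.Join(dir, name+".sentinel")
            if _, err := os.Stat(sentinel); err == nil {
                fmt.Println("ready for processing:", name)
                // move/process the data file here, then delete both files
            }
        }
        time.Sleep(5 * time.Second)
    }
}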
There are normally three approaches to this sort of problem.
providing a signal file, so that when your file is transferred an additional file is sent to mark the transfer as complete
adding an entry to a log file within that directory to indicate a transfer is complete (this really only works if you have a single peer updating the directory, to avoid concurrency issues)
parsing the file to determine completeness, e.g. does the file start with a length field, or is it obviously incomplete? For example, parsing an incomplete XML file will result in a parse error due to the missing end element. Depending on your file's size and format, this can be trivial or very time-consuming.
lsof would possibly be an option, although you've identified your Linux portability issue. If you use this, note the -F option, which formats the output suitable for processing by other programs, rather than being human-readable.
EDIT: Pax identified a fourth (!) method I'd forgotten - using the fact that the timestamp of the file hasn't updated in some time.
There is a fifth method: you can also check whether the FTP session is still active. This works if every peer has its own FTP user account. As long as the user has not logged off from FTP, assume the files are not complete.
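And a matching Go sketch of the timestamp method (again with illustrative directory and threshold): treat a file as complete only once its modification time is more than a quiet period in the past.

package main

import (
    "fmt"
    "os"
    "path/filepath"
    "time"
)

func main() {
    const dir = "/incoming"
    const quiet = 5 * time.Minute // tune to your slowest transfer

    entries, err := os.ReadDir(dir)
    if err != nil {
        panic(err)
    }
    for _, e := range entries {
        info, err := e.Info()
        if err != nil {
            continue // file vanished between ReadDir and Info
        }
        if !e.IsDir() && time.Since(info.ModTime()) > quiet {
            fmt.Println("assumed complete:", filepath.Join(dir, e.Name()))
        }
    }
}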

Windows 7 file problem

I am using VB6 SP6.
This code has worked correctly for years, but I am now having a problem on a Win7-to-Win7 network. It still works correctly on an XP-to-Win7 network.
Open file For Random As ChannelNum Len = 90
' the file is on the other computer on the network
RecNum = (LOF(ChannelNum) \ 90) + 2
Put ChannelNum, RecNum, MyAcFile
' (MyAcFile is a UDT that is less than 90 bytes long)
' ... other code that does not reference the file or RecNum ...
RecNum = (LOF(ChannelNum) \ 90) + 2
Put ChannelNum, RecNum, MyAcFile
Close ChannelNum
The second record overwrites the first.
We had a similar problem in the past with opportunistic locking, so we turn that off at install time, along with some other registry keys that cause data errors on Windows networks.
However, we have had no problems like this for years, so I think MS has some new "better" option that they think will "improve" networking.
Thanks for your help
I doubt there is any "bug" here except in your approach. The file metadata that LOF() interrogates is not meant to be updated immediately by simple writes. A delay seems like a silly idea, prone to occasional failure unless a very long delay is used and sapping performance at best. Even close/reopen can be iffy: VB6's Close statement is an async operation. That's why the Reset statement exists.
This is also why things like FlushFileBuffers() and SetEndOfFile() exist at the API level. They are also relatively expensive operations from a performance standpoint.
Track your records yourself. Only rely on LOF() if necessary after you first open the file.
Hmmm... is the file (as in the Open statement at the top of the code sample) referenced by a UNC filename, or by something like x:\ where x is a mapped drive? Are you incrementing RecNum anywhere? Judging by the code, RecNum is unchanged and hence appears to overwrite the first record... Sorry for sounding, ummm, no pun intended, basic... It would help to show some more code here.
Hope this helps,
Best regards,
Tom.
It can be just a timing issue. In some runs your LOF() call returns more up-to-date information than in others. The file system API is asynchronous; for example, when a write function is called, the increased size will not be reflected immediately.
In short: your code has exposed an old bug which is just easier to reproduce on Windows 7.
The cheapest way to fix the bug: add a delay (it may need to be a significant delay, say 5 seconds).
A more elaborate fix is to force the size update by closing and reopening the file.
