Is it somehow possible to fetch the Blu-ray Disc id and title programmatically on the Windows 7+ platform?
If you can open the following files programmatically, you'll probably get what you need:
/AACS/mcmf.xml - This file is the Managed Copy manifest file and will contain a 'contentID' attribute (in the mcmfManifest tag) that can be used to identify the disc. Typically it is a 32-hexadecimal-digit string.
There is sometimes also a /CERTIFICATE/id.bdmv file, which contains a 4-byte disc organization id (at byte offset 40) followed by a 16-byte disc id.
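If you go that route, reading those two fields is just a seek and two reads. Here is a minimal sketch in Java; the offsets come from the description above, and the drive letter is illustrative:

    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class DiscId {
        // Sketch: read the 4-byte organization id at byte offset 40 of
        // CERTIFICATE/id.bdmv, followed by the 16-byte disc id.
        public static void main(String[] args) throws IOException {
            try (RandomAccessFile f = new RandomAccessFile("E:/CERTIFICATE/id.bdmv", "r")) {
                f.seek(40);
                byte[] orgId = new byte[4];
                byte[] discId = new byte[16];
                f.readFully(orgId);
                f.readFully(discId);
                System.out.println("org id:  " + toHex(orgId));
                System.out.println("disc id: " + toHex(discId));
            }
        }

        private static String toHex(byte[] bytes) {
            StringBuilder sb = new StringBuilder();
            for (byte b : bytes) sb.append(String.format("%02x", b));
            return sb.toString();
        }
    }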
Sometimes there is metadata in the /BDMV/META/DL directory, in the XML file bdmt_eng.xml (replace eng with other three-letter language codes for other languages). For example, on the supplementary disc of The Dark Knight I see this file contains:
<di:title><di:name>The Dark Knight Bonus Disc</di:name></di:title>
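Pulling that title out takes only a few lines of XML parsing. A sketch in Java; local-name() is used in the XPath so the di namespace URI doesn't need to be hard-coded, and the drive letter is illustrative:

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;

    public class BdTitle {
        // Sketch: extract di:title/di:name from BDMV/META/DL/bdmt_eng.xml.
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new File("E:/BDMV/META/DL/bdmt_eng.xml"));
            String title = XPathFactory.newInstance().newXPath().evaluate(
                    "//*[local-name()='title']/*[local-name()='name']", doc);
            System.out.println("Disc title: " + title);
        }
    }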
For .NET, the BDInfo library will parse the relevant disc structure.
I was given a file that seems to be encoded in UTF-8, but every byte that should start with 1 starts with 0.
E.g. in a place where one would expect the Polish letter 'ę', encoded in UTF-8 as the octal bytes \304\231, there is \104\031. Or, in binary, there is 01000100:00011001 instead of 11000100:10011001.
I assume that this was not done on purpose by an evil file creator who enjoys my headaches, but rather is the result of some erroneous operation performed on a correct UTF-8 file.
The question is: what "reasonable" operations could be the cause? I have no idea how the file was created; probably it was exported by some unknown software, then it could have been compressed, uploaded, copied & pasted, converted to another encoding, etc.
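For illustration, clearing the high bit of every byte, as a 7-bit-clean channel or a naive "to 7-bit ASCII" pass would do, reproduces exactly the damage described. A small sketch in Java:

    import java.nio.charset.StandardCharsets;

    public class HighBitStrip {
        public static void main(String[] args) {
            byte[] utf8 = "ę".getBytes(StandardCharsets.UTF_8);  // 0xC4 0x99
            for (byte b : utf8) {
                int original = b & 0xFF;
                int stripped = original & 0x7F;  // high bit cleared, as by a 7-bit channel
                System.out.println(toBits(original) + " -> " + toBits(stripped));
                // prints 11000100 -> 01000100 and 10011001 -> 00011001
            }
        }

        private static String toBits(int value) {
            return String.format("%8s", Integer.toBinaryString(value)).replace(' ', '0');
        }
    }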
I'll be grateful for any idea :)
I have created a layout based on a COBOL copybook.
Layout snapshot:
I tried to load data selecting the same layout, but it gives the wrong result for some columns. I tried using all binary numeric types.
CLASS-ORDER-EDGE
DIV-NO-EDG
OFFICE-NO-EDG
REG-AREA-NO-EDG
CITY-NO-EDG
COUNTY-NO-EDG
BILS-COUNT-EDG
REV-AMOUNT-EDG
USAGE-QTY-EDG
GAS-CCF-EDG
Result snapshot:
The input file can be found at the link below:
https://drive.google.com/open?id=0B-whK3DXBRIGa0I0aE5SUHdMTDg
Expected output:
Related thread: Unpacking COMP-3 digit using Java
First problem: you have done an EBCDIC --> ASCII conversion on the file!
The EBCDIC --> ASCII conversion converts binary fields as well as text.
For example:
    Comp-3 value    Hex        Hex after ASCII conversion
    400             x'400c'    x'200c'

x'40' is the EBCDIC space character; it gets converted to the ASCII space character, x'20'.
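That corruption is silent: the damaged bytes still look like valid packed decimal. A sketch of unpacking a COMP-3 field in Java (in the spirit of the related thread above) shows x'400c' decoding to 400 while the converted x'200c' decodes to 200:

    public class Comp3Demo {
        // Sketch: unpack a COMP-3 (packed decimal) field. Each byte holds two
        // decimal nibbles; the low nibble of the last byte is the sign
        // (0xC or 0xF = positive, 0xD = negative).
        static long unpackComp3(byte[] field) {
            long value = 0;
            for (int i = 0; i < field.length; i++) {
                int hi = (field[i] >> 4) & 0x0F;
                int lo = field[i] & 0x0F;
                value = value * 10 + hi;
                if (i < field.length - 1) {
                    value = value * 10 + lo;   // ordinary digit nibble
                } else if (lo == 0x0D) {
                    value = -value;            // sign nibble
                }
            }
            return value;
        }

        public static void main(String[] args) {
            System.out.println(unpackComp3(new byte[]{0x40, 0x0C}));  // 400 (original)
            System.out.println(unpackComp3(new byte[]{0x20, 0x0C}));  // 200 (after the bad conversion)
        }
    }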
You need to do a binary transfer, keeping the file as EBCDIC:
Check the file on the mainframe: if it has RECFM=FB, you can do a straight binary transfer.
If the file is RECFM=VB, make sure you transfer the RDW (Record Descriptor Word), or copy the VB file to an FB file on the mainframe.
Other points:
You will have to update RecordEditor/JRecord.
The font will need to be EBCDIC (cp037 for US EBCDIC; look up the appropriate code page for other regions).
The FileStructure/FileOrganisation needs to change (fixed length / VB).
Finally
BILS-Count-EDG is either 9 characters long or starts in column 85 (and is 8 bytes long).
You should include the XML as text, not paste in a picture.
In the RecordEditor, if you right click >>> Edit Record, it will show the fields as Value, Raw Text and Hex. That is useful for seeing what is going on.
You do not seem to accept many answers; it is not relevant whether an answer solves your problem, it is whether the answer is the correct answer to the question.
I have created an ACH file which, in a text editor, looks exactly like a valid ACH file. When I open it in an ACH viewer tool I get an error saying that the first character must be 1. I found this in the NACHA file specs: 'Picture: This is the type of bit the ACH system is expecting to see. A 9 indicates a numeric value and an X indicates an alphabetic value. If you put a letter in a PIC 9 position, the system will reject the field. If you see a number in parentheses after the X or 9, that indicates the number of characters in that field. For example, 9(10) means that field contains 10 numeric characters.'
The first position in the file is supposed to have the content 1 in a Picture format of size 1. I don't understand what I need to do to fix this.
I finally downloaded a hex file explorer and saw that the valid ACH file and my file had different first characters. I found out that the ACH file needs the data in ASCII format. All I had to do when populating the ACH file with data was convert the text to ASCII before writing it.
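A quick way to confirm this kind of problem is to dump the first byte: an ASCII '1' is 0x31, whereas an EBCDIC '1' would be 0xF1, and UTF-16 would start with a BOM or a 0x00 byte. A sketch in Java; the file name is illustrative:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class AchEncodingCheck {
        public static void main(String[] args) throws IOException {
            byte[] data = Files.readAllBytes(Paths.get("payments.ach"));
            // An ASCII '1' (the File Header record type code) is byte 0x31.
            System.out.printf("first byte = 0x%02X (ASCII '1' is 0x31)%n", data[0] & 0xFF);

            // When writing, encode the record text explicitly as ASCII:
            String fileHeader = "1...";  // the NACHA File Header record, truncated here
            Files.write(Paths.get("payments.ach"),
                    fileHeader.getBytes(StandardCharsets.US_ASCII));
        }
    }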
I want to use Ghostscript to optimize PDF files.
My files are generated by iText, and there is a font which is embedded too many times - 3000+ times.
I want to reprint the document with Ghostscript, so that it removes all the embedded copies and embeds the font only once in the file.
Do you know how to do it?
And an additional question - is there any difference between Ghostscript and Ghost4j?
Thanks
You cannot do that, most likely. Without seeing the file I cannot be certain, but the probability is that each font is embedded as a subset. That is, it contains just a few of the glyphs that were originally present in the font.
So if the first instance contains, say a, c, f and g and the second instance contains b, e and h you can see that the two fonts are actually different.
Worse, the text is usually re-encoded, so the character codes are not what you would expect. In the example above, 'a' would not have the character code 0x61 (ASCII for 'a'); it would have the character code 1, 'c' would be 2, 'f' would be 3 and so on. But in the second case, character code 1 would be 'b', character code 2 would be 'e', etc.
There's no easy way to recombine the multiple font subsets, and also re-encode each set of text to the 'combined' font.
Where the pdfwrite device detects multiple subset fonts which have compatible encodings (the character codes used are unique in each font) it will combine them together. It won't attempt to re-encode them again though, so if the two fonts use the same character codes (as per my example above) pdfwrite will just emit two fonts.
Assuming you've already tried running the file through pdfwrite and didn't get the result you wanted, then that's it, you can't achieve the result you want with the current code.
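For reference, in case you haven't tried it yet, a plain pdfwrite pass looks like this (the output file name is illustrative):

    gs -o combined.pdf -sDEVICE=pdfwrite input.pdf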
Probably you can tell iText not to subset the fonts, which will solve the problem for you at the source, rather than trying to fix it afterwards.
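If you control the iText code, that would look something like the sketch below, assuming iText 5's BaseFont API (the font path is illustrative):

    import com.itextpdf.text.pdf.BaseFont;

    public class FullFontEmbedding {
        // Sketch: disable subsetting so the complete font is embedded once,
        // instead of a fresh subset for each place the font is used.
        public static BaseFont loadFullFont(String fontPath) throws Exception {
            BaseFont font = BaseFont.createFont(fontPath,
                    BaseFont.IDENTITY_H, BaseFont.EMBEDDED);
            font.setSubset(false);  // embed the whole font
            return font;
        }
    }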
I read that a tar entry type of 'L' (ASCII 76) is used by GNU tar and GNU-compliant tar utilities to indicate that the next entry in the archive has a "long" name. In this case the header block with the entry type of 'L' usually encodes the name ././@LongLink.
My question is: where is the format of the next block described?
The format of a tar archive is very simple: it is just a series of 512-byte blocks. In the normal case, each file in a tar archive is represented as a series of blocks. The first block is a header block, containing the file name, entry type, modified time, and other metadata. Then the raw file data follows, using as many 512-byte blocks as required. Then the next entry.
If the filename is longer than will fit in the space allocated in the header block, GNU tar apparently uses what's known as "the ././@LongLink trick". I can't find a precise description of it.
When the entry type is 'L', how do I know how long the "long" filename is? Is the long name limited to 512 bytes, in other words, whatever fits in one block?
Most importantly: where is this documented?
Just by observation of a single archive, here's what I surmised about the 'L' entry type in tar archives and the ././@LongLink name:
The 'L' entry is present in a header for a series of 1 or more 512-byte blocks that hold just the filename for a file or directory with a name over 100 chars. For example, if the filename is 1200 chars long, then the size in the header block will be 1200, and there will be 3 additional blocks with filename data; the last block is partially filled.
Following that series is another header block, in the traditional form - a header with type '0' (regular file) or '5' (directory), followed by the appropriate number of data blocks with the entry data. In the header for this series, the name will be truncated to the first 100 characters of the actual name.
EDIT
See my implementation here:
http://cheesoexamples.codeplex.com/SourceControl/changeset/view/99885#1868643
Note that the information about all of that can be found in the libtar project:
http://www.feep.net/libtar/
The proposed header is libtar.h (as opposed to the POSIX tar.h), which clearly includes a long filename and a long symbolic link.
You get the "fake" headers plus data for the long filename/link first, then the "real" header (with everything except the actual filename and symbolic link) after that:
HEADER type 'L'
BLOCKS of data with the real long filename
HEADER type 'K'
BLOCKS of data with the real symbolic link
HEADER type '0' (or '5' for directory, etc.)
BLOCKS of data with the actual file contents
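To make the layout concrete, here is a sketch in Java of consuming an 'L' entry: read the 512-byte header, take its octal size field (offset 124) as the length of the long name, read that many bytes, and skip the padding up to the next 512-byte boundary. The field offsets are the standard tar header offsets; error handling is omitted:

    import java.io.DataInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;

    public class GnuLongNameDemo {
        private static final int BLOCK = 512;

        public static void main(String[] args) throws IOException {
            try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
                byte[] header = new byte[BLOCK];
                in.readFully(header);
                char type = (char) header[156];  // typeflag field
                long size = Long.parseLong(new String(header, 124, 11,
                        StandardCharsets.US_ASCII).trim(), 8);  // size field, octal
                if (type == 'L') {
                    byte[] name = new byte[(int) size];
                    in.readFully(name);  // the real long filename
                    in.skipBytes((int) ((BLOCK - size % BLOCK) % BLOCK));  // block padding
                    System.out.println("long name: "
                            + new String(name, StandardCharsets.UTF_8).trim());
                    // The next header block is the real '0'/'5' entry,
                    // carrying a truncated copy of the name.
                }
            }
        }
    }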
Of course, under MS-Windows you probably won't handle symbolic links, although it is said that symbolic links have been working under MS-Windows since Win7 (finally; and this is now official in Win10!).
Pertinent definition from libtar.h:
/* GNU extensions for typeflag */
#define GNU_LONGNAME_TYPE 'L'
#define GNU_LONGLINK_TYPE 'K'