We have a requirement to FTP a batch report to an Excel sheet in .csv format. The batch report contains both single-byte and double-byte characters, for example English and Chinese. The data on the mainframe is in Base64 format, and when it is FTP'ed in either binary or ASCII mode, the resulting .csv spreadsheet shows only junk characters. We need a method to FTP the batch report file so that the transferred report is readable.
Request your help in resolving this issue.
I'm not familiar with Chinese character sets, but if you're not restricted to CSV, you might try formatting an XML document for Excel, whereby you can specify the fonts as part of the spreadsheet definition.
Assuming that isn't an option, I would think the Base64 data might need to be translated from EBCDIC to ASCII before transmission and then delivered in BINARY. Otherwise you risk having the data translated to something you didn't expect.
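If you end up cleaning the file up on the workstation instead, here is a rough Python sketch of that idea. Everything in it is an assumption rather than your actual setup: the file names are placeholders, cp037 stands in for your single-byte EBCDIC codepage, and GB18030 for the decoded Chinese payload; a mixed SBCS/DBCS host codepage such as IBM-935 may need an additional codec package.

import base64

# Hedged sketch: file names and codepages are assumptions. The file is assumed
# to have been pulled down in BINARY, so it still holds EBCDIC-encoded Base64 text.
with open("batchrpt.bin", "rb") as f:
    b64_text = f.read().decode("cp037")    # EBCDIC bytes -> Base64 characters

payload = base64.b64decode(b64_text)        # Base64 -> original report bytes
report = payload.decode("gb18030")          # adjust to the payload's real encoding

# utf-8-sig writes a BOM so Excel recognises the .csv as UTF-8.
with open("batchrpt.csv", "w", encoding="utf-8-sig", newline="") as out:
    out.write(report)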
Another way to see what is really happening is to send the data as ASCII, retrieve it as BINARY, and then compare the before and after results to see what characters were changed en route during transmission. I recall having to do something similar once to resolve different code sets in Europe vs. the U.S.
I'm not sure any of these suggestions would represent a "solution" to your problem, but these would be ideas that I would explore. I would be interested in hearing how you resolve this.
Related
I have a Python program that extracts data from an API, applies transformations, and converts it to a CSV to be used in Tableau. When I view the file in Excel and Google Sheets, it looks fine: no data formatting or read errors, as it is formatted in standard UTF-8.
When I read it in Tableau, it's a different story: the columns lose shape and get parsed incorrectly.
I am thinking it has to do with the fact that my data set is text heavy and contains punctuation, but I have been able to work with data in this format just fine without having to do any custom formatting.
It looks like your CSV has multiline fields (which are quoted).
You'll somehow have to tell the Tableau reader/parser to read your data as quoted (and multiline).
Also check the escaping of quotes inside a field: usually this is done with another (doubled) quote, but it could also be with a backslash.
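For what it's worth, here is a minimal Python sketch (the file name is made up) showing that the standard csv module already honours quoted multiline fields and doubled quotes, which is the behaviour you need the Tableau side to match:

import csv

# Open with newline="" so embedded newlines inside quoted fields survive intact.
with open("export.csv", newline="", encoding="utf-8") as f:
    reader = csv.reader(f, quotechar='"', doublequote=True)
    for row in reader:
        print(len(row), row)   # every row should report the same column count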
I have a problem with Chinese characters when I export them from Oracle Forms 10g to Excel on Windows 7. They look like Chinese, but they are not the right Chinese characters. Take into consideration that I have already changed the language of my computer to Chinese and restarted it. I use the owa_sylk utility and call the Excel report like:
v_url := 'http://....../excel_reports.rep?sqlString=' ||
v_last_query ||
'&font_name=' ||
'Arial Unicode MS'||
'&show_null_as=' ||
' ' ;
web.show_document(v_url,'_self');
Interestingly, when I change the language of my computer to English, this column is empty. Besides, I realized that if I open the file with a text editor it has the right Chinese words, but when we open it with Excel there is a problem.
Does anyone have a clue?
Thanks
Yes, the problem comes from different encodings. If the DB uses UTF-8 and you need to send data to Excel in a different character set, you can convert the data right inside owa_sylk using the convert function.
For example, in owa_sylk.print_rows change
p( line );
to
p( convert(line, 'ZHS32GB18030', 'AL32UTF8') );
where 'ZHS32GB18030' is one of the Chinese character sets and 'AL32UTF8' is UTF-8.
To choose the encoding parameters, use Appendix A (the character set listing in Oracle's Globalization Support Guide).
You can also do
SELECT * FROM V$NLS_VALID_VALUES WHERE parameter = 'CHARACTERSET'
to see all the supported encodings.
This is a character encoding issue. What you need to make sure is that all tools in the whole chain (database, web service, Excel, text editor and web browser) use the same character encoding.
Changing your language can help here but a better approach is to nail the encoding down for each part of the chain.
The web browser, for example, will prefer the encoding supplied by the web server over the OS's language settings.
See this question how to set UTF-8 encoding (which can properly display Chinese in any form) for Oracle: export utf-8 data to text file with oracle sql developer
I'm not sure how to set the encoding for owa_sylk, you will have to check the documentation (couldn't find any, though). If you can't find anything, ask a question here or use a different tool.
So you need to find out who executes excel_reports.rep and configure that correctly. Use your web browser's developer tools and check the "charset" or "encoding" of the page.
The problems in Excel come from the file format you feed into it. Excel's native formats (.xls and .xlsx) are Unicode-safe; .csv isn't. So if you can read the file in your text editor, chances are it's a non-Excel format which Excel can parse but which doesn't carry the necessary encoding information.
If you were able to generate a UTF-8 encoded file with the steps above, you can load it by choosing "65001: Unicode (UTF-8)" from the drop-down list that appears next to "File origin" in the Text Import Wizard (source).
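If you generate the file yourself, you can sidestep the wizard entirely by writing the BOM up front. A small Python sketch (file name and rows are purely illustrative):

import csv

rows = [["名字", "城市"], ["张伟", "北京"]]

# "utf-8-sig" prepends a BOM; Excel uses it to auto-detect UTF-8 when opening the .csv.
with open("report.csv", "w", newline="", encoding="utf-8-sig") as f:
    csv.writer(f).writerows(rows)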
I am processing some CDRs (call detail records). I don't know exactly what file format it is, but I assume these are ASN.1 BER-encoded files. My problem is that I want to modify some data in these files, but I don't know which editor or decoder I can use to do that. I searched a lot and found many ASN.1 decoders as well as ASN.1 BER viewers/editors, but none of them allows what I want to perform.
The CDR is supposed to contain customer details, phone numbers, telecom services (telephony, SMS, MMS), etc.
One of the CDR names is GGSN01_20120105000102_56641-09-12-01-09%3A30 and the file type is just "File". No other information is available. When I open this file in a text editor it shows some rectangles and some text data.
Any telecom person could probably help me; I am new to the telecom domain.
Please ask if you need more information. Thanks.
You would need to know something about ASN.1 and BER to be able to correctly edit your file. BER is a binary format, not ASCII text, thus what you see in your text editor. Even modifying any embedded plain text is only safe if you are not changing the length of the string; BER uses nested structures that encode lengths and so a change in the length of a string value requires adjustments to the encoded lengths of the enclosing structures. Additionally, in order to really know what your data is, you would need to know the ASN.1 that describes it (defines the types that describe your encoded data).
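To see why edits ripple upward, here is a rough Python sketch of a BER tag-length-value walker (single-byte tags and definite lengths only; the file name is the one from the question). Every constructed value carries the total length of everything nested inside it, so growing one string means rewriting the lengths of all its ancestors:

# Rough sketch only: enough to show the nesting, not enough to edit production CDRs.
def walk(data, offset=0, end=None, depth=0):
    end = len(data) if end is None else end
    while offset < end:
        tag = data[offset]
        offset += 1
        length = data[offset]
        offset += 1
        if length & 0x80:                      # long form: next n bytes hold the length
            n = length & 0x7F
            length = int.from_bytes(data[offset:offset + n], "big")
            offset += n
        print("  " * depth + f"tag=0x{tag:02X} len={length}")
        if tag & 0x20:                         # constructed: the value is itself more TLVs
            walk(data, offset, offset + length, depth + 1)
        offset += length

with open("GGSN01_20120105000102_56641-09-12-01-09%3A30", "rb") as f:
    walk(f.read())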
You could use a tool such as ASN.1 editor, but without the requisite background knowledge, I think it will not be very helpful to you. You can follow various links on this resources page to get more information about ASN.1. (full disclosure: I am currently an Obj-Sys employee).
Look for tools like enber and unber; they come as debugging tools with the free asn1c compiler by Lev Walkin. At least you get text format out of them.
The systematic solution is, of course, to write a program that reads the BER file, applies the changes, and then writes out the altered BER file. To do so you need the ASN.1 specification of your CDR format (usually found in the specification of the standard you are using, e.g. IMS), an ASN.1 compiler such as Lev's, and some programming skills.
Is there a standard or open format which can be used to describe the formatting of a flat file? My company integrates many different customer file formats. With an XML file it's easy to get or create an XSD to describe the XML file format. I'm looking for something similar to describe a flat file format (fixed width, delimited, etc.). Stylus Studio uses a proprietary .conv format to do this. That .conv format can be used at runtime to transform an arbitrary flat file to an XML file. I was just wondering if there was any more open or standards-based method for doing the same thing.
I'm looking for one method of describing a variety of flat file formats whether they are fixed width or delimited, so CSV is not an answer to this question.
XFlat:
http://www.infoloom.com/gcaconfs/WEB/philadelphia99/lyons.HTM#N29
http://www.unidex.com/overview.htm
For complex cases (e.g. log files) you may consider a lexical parser.
About selecting existing flat file formats: there is the comma-separated values (CSV) format or, more generally, DSV. But these are not "fixed-width", since a delimiter character (such as a comma) separates individual cells. Note that though CSV is standardized, not everybody adheres to the standard. Also, CSV may be too simple for your purposes, since it doesn't allow a rich document structure.
In that respect, the standardized and only slightly more complex (but thus more useful) formats JSON and YAML are a better choice. Both are supported out of the box by plenty of languages.
Your best bet is to have a look at all languages listed as non-binary in this overview and then determine which works best for you.
About describing flat file formats: This could be very easy or difficult, depending on the format. Though in most cases easier solutions exist, one way that will work in general is to view the file format as a formal grammar, and write a lexer/parser for it. But I admit, that's quite† heavy machinery.
If you're lucky, a couple of advanced regular expressions may do the trick (a small sketch follows the footnotes); most formats will not lend themselves to that, however.‡ If you plan on writing a lexer/parser yourself, I can advise PLY (Python Lex-Yacc). But many other solutions exist, in many different languages, a lot of them more convenient than the old-school Lex & Yacc. For more, see What parser generator do you recommend?
†: Yes, that may be an understatement.
‡: Even properly describing the email address format is not trivial.
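When regular expressions do suffice, the sketch is short. Here is a made-up fixed-width record layout (6-character id, 20-character name, 8-digit date) parsed with Python:

import re

# A hypothetical fixed-width layout: 6-char id, 20-char name, 8-digit date.
record = re.compile(r"^(?P<id>.{6})(?P<name>.{20})(?P<date>\d{8})$")

line = "000042Alice Example       20240115"
m = record.match(line)
if m:
    print(m.group("id").strip(), m.group("name").strip(), m.group("date"))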
COBOL (whether you like it or not) has a standard format for describing fixed-width record formats in files.
Other file formats, however, are somewhat simpler to describe. A CSV file, for example, is just a list of strings. Often the first row of a CSV file is the column names -- that's the description.
There are examples of using JSON to formulate metadata for text files. This can be applied to JSON files, CSV files and fixed-format files.
Look at http://www.projectzero.org/sMash/1.1.x/docs/zero.devguide.doc/zero.resource/declaration.html
This is IBM's sMash (Project Zero) using JSON to encode metadata. You can easily apply this to flat files.
At the end of the day, you will probably have to define your own file standard that caters specifically to your storage needs. What I suggest is using XML, YAML or JSON as your internal container for all of the file types you receive. On top of this, you will have to implement some extra validation logic to maintain metadata such as the column sizes of the fixed-width files (for importing from and exporting to fixed width). Alternatively, you can store or link a set of metadata to each file you convert to the internal format.
There may be a standard out there, but it's too hard to create 'one size fits all' solutions for these problems. There are entity relationship management tools out there (Talend, others) that make creating these mappings easier, but you will still need to spend a lot of time maintaining file format definitions and rules.
As for enforcing column width, XML might be the best solution as you can describe the formats using XML schemas (with the length restriction). For YAML or JSON, you may have to write your own logic for this, although I'm sure someone else has come up with a solution.
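As a concrete illustration of the metadata-container idea (the field names, widths and sample line are all invented for the example), a JSON description of a fixed-width layout plus a tiny generic parser in Python might look like this:

import json

# A made-up fixed-width layout described as JSON metadata.
layout = json.loads("""
{
  "record": [
    {"name": "customer_id", "width": 8},
    {"name": "amount",      "width": 10},
    {"name": "currency",    "width": 3}
  ]
}
""")

def parse_fixed(line, spec):
    # Slice the line column by column according to the declared widths.
    fields, pos = {}, 0
    for col in spec["record"]:
        fields[col["name"]] = line[pos:pos + col["width"]].strip()
        pos += col["width"]
    return fields

print(parse_fixed("00001234      9.99USD", layout))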
See XML vs comma delimited text files for further reference.
I don't know if there is any standard or open format to describe a flat file format, but one industry has done this: banking. Financial institutions communicate using standardized messages over a dedicated network called SWIFT. SWIFT messages were originally positional (before SWIFTML, the XML-ified version). I don't know if it's a good suggestion as it's kind of obscure, but maybe you could look at the SWIFT Formatting Guide; it may give you some ideas.
Having said that, check out Flatworm, a humble flat-file parser. I've used it to parse positional and/or CSV files and liked its XML descriptor format. It may be a better suggestion than SWIFT :)
CSV
CSV is a delimited data format that has fields/columns separated by the comma character and records/rows separated by newlines. Fields that contain a special character (comma, newline, or double quote) must be enclosed in double quotes. If a line contains a single entry which is the empty string, it too may be enclosed in double quotes (to distinguish it from an empty line). If a field's value contains a double quote character, it is escaped by placing another double quote character next to it. The CSV file format does not require a specific character encoding, byte order, or line terminator format.
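A quick Python demonstration of those quoting rules (the values are arbitrary):

import csv, io

buf = io.StringIO()
# One field with quotes, one with a comma, one with an embedded newline.
csv.writer(buf).writerow(['He said "hi"', 'a,b', 'two\nlines'])
print(buf.getvalue())
# "He said ""hi""","a,b","two
# lines"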
The CSV entry on wikipedia allowed me to find a comparison of data serialization formats that is pretty much what you asked for.
The only similar thing I know of is Hachoir, which can currently parse 70 file formats:
http://bitbucket.org/haypo/hachoir/wiki/Home
I'm not sure if it really counts as a declarative language, since it's plugin parser based, but it seems to work, and is extensible, which may meet your needs just fine.
As an aside, there are interesting standardised, extensible flat-file FORMATS, such as IFF (Interchange File Format).
I'm uploading a file, originally ASCII and converted to EBCDIC, from Windows to z/OS. My problem is that when I check the file after uploading it, I see a lot of extra newlines.
When I checked its hex dump I discovered that when the mainframe sees an x'15' it translates it into a newline. The file contains packed decimals, so the hex can legitimately contain, say, x'001500001C', but when I upload it, the mainframe mistakes that x'15' for a newline. Can anyone help me with this problem?
You should put your FTP client (or library, if the upload is done by your code) into binary (IMAGE TYPE) mode instead of ASCII/EBCDIC if you are sending a file that is already in EBCDIC, I believe.
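A minimal ftplib sketch of that (host, credentials, dataset name and the SITE parameters are placeholders, so check them against your system; SITE RECFM/LRECL/BLKSIZE are the usual z/OS FTP server attributes):

from ftplib import FTP

ftp = FTP("mainframe.example.com")
ftp.login("user", "password")
# Typical attributes for an FB 80 dataset; adjust to your target dataset.
ftp.sendcmd("SITE RECFM=FB LRECL=80 BLKSIZE=27920")
with open("report.ebcdic", "rb") as f:
    ftp.storbinary("STOR 'HLQ.REPORT.DATA'", f)   # storbinary issues TYPE I (binary) itself
ftp.quit()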
It depends on the type of target "file" that you're uploading to.
If you're uploading to a member that has fixed block size (e.g., FB80), you'll need to ensure all the lines are padded out with spaces before you transmit it up (in binary mode).
Text mode transfers are not suitable for binary files (and your files are binary if they contain packed decimals - there's no reliable way for FTP to detect real line-end characters).
You'll need to fix your Windows ASCII-to-EBCDIC converter to be able to generate fixed length records.
The only other option is with a REXX script on the mainframe but this would still require being able to tell the difference between a real end-of-line marker and that marker within the binary data.
You could possibly tell the presence of a packed decimal by virtue of the fact that it consisted of BCD nybbles, the last of which is 0xC or 0xD, but that could also cause false positives or negatives.
My advice: when you convert it from ASCII to EBCDIC, pad out the lines to the desired record length at the same time.
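A hedged Python sketch of that conversion step (the codepage, file names and LRECL are assumptions, and this text-only conversion would still need separate handling for the packed-decimal fields):

LRECL = 80   # target record length; match the dataset's LRECL

with open("report.txt", "r", encoding="ascii") as src, \
     open("report.ebcdic", "wb") as dst:
    for line in src:
        # Pad (or truncate) each line to a full record, then convert to EBCDIC (cp037 assumed).
        record = line.rstrip("\r\n").ljust(LRECL)[:LRECL]
        dst.write(record.encode("cp037"))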
The other point I'd like to raise is that if you just want to look at the files on the mainframe (not use them from any code that requires EBCDIC), the ISPF editor includes a few new commands (as of z/OS 1.9 if I remember correctly).
SOURCE ASCII will display the data as ASCII rather than EBCDIC. In addition, the LF command allows you to massage the ASCII stream in an FB member to correctly fix up line endings.