I have created an ACH file which, in a text editor, looks exactly like a valid ACH file. However, when I open it in an ACH viewer tool I get an error saying that the first character must be 1. I found this in the NACHA file specs: 'Picture: This is the type of bit the ACH system is expecting to see. A 9 indicates a numeric value and an X indicates an alphabetic value. If you put a letter in a PIC 9 position, the system will reject the field. If you see a number in parentheses after the X or 9, that indicates the number of characters in that field. For example 9(10) means that field contains 10 numeric characters.'
The first position in the file is supposed to have content 1 in Picture format of size 1. I don't understand what I need to do to fix this.
I finally downloaded a hex file viewer and saw that the valid ACH file and my file had different first characters. It turned out that the ACH file needs its data in ASCII encoding. All I had to do was convert the data to ASCII before writing it to the file.
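For anyone hitting the same symptom, the check can also be done programmatically instead of with a hex viewer. A minimal C sketch (the file name is made up for illustration):

#include <stdio.h>

int main(void) {
    /* "payments.ach" is a hypothetical file name */
    FILE *f = fopen("payments.ach", "rb");
    if (!f) return 1;

    int first = fgetc(f);
    fclose(f);

    /* An ASCII-encoded NACHA file starts with the File Header Record, whose
       record type code is '1', i.e. byte 0x31. An EBCDIC '1' is 0xF1, and a
       UTF-16 file usually starts with a byte-order mark, which is why a
       viewer that expects ASCII rejects the file. */
    printf("first byte: 0x%02X (%s)\n", first,
           first == 0x31 ? "ASCII '1' - OK" : "not an ASCII '1'");
    return 0;
}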
I got a CSV file from my front-end as an XString, and after I convert it into a String it looks as follows:
In the next step I'm trying to perform SPLIT lv_string AT '##' INTO TABLE itab so I can get my data, but it doesn't split anything: itab contains a single line equal to lv_string.
If I try REPLACE '#' IN lv_string WITH space, lv_string doesn't change and sy-subrc is 4.
I think the problem is that SAP uses the symbol # in this context as a placeholder for non-printable characters (which result from the byte->string conversion).
My question is: how may I use SPLIT/REPLACE with # in this case?
I also thought I could change the SAP code page used when converting the XString to a String, but I already use SAP code page 4110 (UTF-8) and don't know of a better alternative...
When you display a variable with the debugger, it displays the generic character # (U+0023) for all control characters which are not assigned a glyph ("non-printable symbols" as you say).
If the variable corresponds to the contents of a text file, and ## occurs frequently, there is a good chance that it is the combination of the control characters U+000D and U+000A, which correspond to a "newline" in Windows files.
In the backend debugger, you can check the hexadecimal values of those characters by clicking the button "Hexadezimal" (shown in your screenshot).
You may use the variable CL_ABAP_CHAR_UTILITIES=>CR_LF, which contains exactly those two control characters, e.g. SPLIT lv_string AT cl_abap_char_utilities=>cr_lf INTO TABLE itab.
I'm developing a piece of software that stores its data in a binary file format. However, as a courtesy to innocent shell users who might cat such a file to inspect its contents, I'm thinking of putting an ASCII-compatible "magic string" at the start of the file that gives the name and the version of the binary format.
I'm thinking of having at least ten lines (\n) in the message so that head with its default settings doesn't reach the binary part.
Now, I wonder whether there is any control character or escape code that would hint to the shell that the following content isn't interpretable as printable text and should simply be ignored? I tried 0x00 (the null byte) and 0x04 (Ctrl-D), but they seem to be ignored when catting the file.
cat regards a file as text. There is no way you can trigger an end-of-file, since EOF is not actually a character.
The other way around works, of course: specifying a format whose binary part only starts after a certain marker.
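To illustrate the layout described in the question, here is a minimal C sketch; the magic string, version text and file name are made-up examples, not part of any standard:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    FILE *f = fopen("example.dat", "wb");   /* hypothetical file name */
    if (!f) return 1;

    /* Human-readable header: magic string, version, then enough newlines
       that head (10 lines by default) stops before the binary payload. */
    const char *header =
        "MYFORMAT version 1\n"
        "This is a binary file; the payload starts after the header.\n"
        "\n\n\n\n\n\n\n\n";
    fwrite(header, 1, strlen(header), f);

    /* Binary payload begins here. */
    uint32_t payload[] = { 0xDEADBEEF, 42 };
    fwrite(payload, sizeof payload[0], 2, f);

    fclose(f);
    return 0;
}

A reader of the format would then skip everything up to and including the last header newline before interpreting the rest of the file as binary.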
I have created a layout based on a COBOL copybook.
Layout snapshot:
I tried to load data selecting the same layout, but it gives me wrong results for some columns. I tried using all the binary numeric types.
CLASS-ORDER-EDGE
DIV-NO-EDG
OFFICE-NO-EDG
REG-AREA-NO-EDG
CITY-NO-EDG
COUNTY-NO-EDG
BILS-COUNT-EDG
REV-AMOUNT-EDG
USAGE-QTY-EDG
GAS-CCF-EDG
Result snapshot:
The input file can be found in the attachment below:
https://drive.google.com/open?id=0B-whK3DXBRIGa0I0aE5SUHdMTDg
Expected output:
Related thread: Unpacking COMP-3 digit using Java
First problem: you have done an EBCDIC --> ASCII conversion on the file!
The EBCDIC --> ASCII conversion will also try to convert binary fields as well as the text.
For example:
Comp-3 value    Hex        Hex after ASCII conversion
400             x'400c'    x'200c'

x'40' is the EBCDIC space character; it gets converted to the ASCII space character x'20'.
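To make the corruption concrete, here is a minimal C sketch of how a COMP-3 (packed decimal) field is decoded; the helper function name is made up for illustration:

#include <stdio.h>
#include <stdint.h>

/* Decode a COMP-3 (packed decimal) field: every nibble is a decimal digit
   except the last nibble, which is the sign (0xC/0xF positive, 0xD negative). */
long decode_comp3(const uint8_t *bytes, size_t len) {
    long value = 0;
    for (size_t i = 0; i < len; i++) {
        value = value * 10 + (bytes[i] >> 4);          /* high nibble */
        if (i + 1 < len)
            value = value * 10 + (bytes[i] & 0x0F);    /* low nibble */
    }
    return (bytes[len - 1] & 0x0F) == 0x0D ? -value : value;
}

int main(void) {
    uint8_t original[]  = { 0x40, 0x0C };  /* x'400C' as stored on the mainframe */
    uint8_t converted[] = { 0x20, 0x0C };  /* x'200C' after the EBCDIC --> ASCII transfer */

    printf("original : %ld\n", decode_comp3(original, 2));   /* prints 400 */
    printf("converted: %ld\n", decode_comp3(converted, 2));  /* prints 200 */
    return 0;
}

The "translated" bytes still decode as a valid packed value, just the wrong one, which is why the damage shows up as wrong numbers rather than an obvious error.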
You need to do a binary transfer, keeping the file as EBCDIC:
Check the file on the mainframe: if it has RECFM=FB, you can do a straight binary transfer.
If the file is RECFM=VB, make sure you transfer the RDW (Record Descriptor Word), or copy the VB file to an FB file on the mainframe.
Other points:
You will have to update RecordEditor/JRecord.
The font will need to be EBCDIC (cp037 for US EBCDIC; look up the appropriate code page for other regions).
The FileStructure/FileOrganisation needs to change (Fixed length or VB).
Finally
BILS-Count-EDG is either 9 characters long or starts in column 85 (and is 8 bytes long).
You should include the XML as text, not paste in a picture of it.
In the RecordEditor, if you right-click >>> Edit Record, it will show the fields as Value, Raw Text and Hex. That is useful for seeing what is going on.
You do not seem to accept many answers; what matters is not whether an answer solves your problem but whether it is the correct answer to the question.
I have used the code from MSDN (Obtaining a File Name From a File Handle) to obtain the file name for a file handle that I got from FindFirstChangeNotification.
But now the problem is that the encoding of the resulting string is somehow wrong. I just see one character instead of all the characters (usually a question mark).
So my code calls GetMappedFileName, and gets question marks.
if (GetMappedFileName (GetCurrentProcess(),
pMem,
pszFilename,
MAX_PATH))
Why?
You are calling the 'A' form of GetMappedFileName, which can only deliver characters in your current ACP. Your filename has characters not in the current ACP, so they get turned to question marks.
If the file name includes Unicode characters that have no representation in your current ACP, you will get question marks. You should call the 'W' form of the API to get the Unicode form of the file name, and then decide what you want to do with it.
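A minimal sketch of the 'W' call, assuming pMem is the pointer into the mapped view from the MSDN example (link with Psapi.lib, as in that example):

#include <windows.h>
#include <psapi.h>
#include <stdio.h>

/* pMem is assumed to be a pointer into a mapped view of the file, as in the
   MSDN "Obtaining a File Name From a File Handle" example. */
void PrintMappedName(void *pMem)
{
    WCHAR szFilename[MAX_PATH];   /* wide-character buffer instead of char */

    if (GetMappedFileNameW(GetCurrentProcess(),
                           pMem,
                           szFilename,
                           MAX_PATH))
    {
        /* The device-form path (\Device\HarddiskVolume...\...) is returned,
           now with its Unicode characters intact. */
        wprintf(L"%ls\n", szFilename);
    }
}

From there you can convert it with WideCharToMultiByte if the rest of your code needs a narrow string.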
I am using SAS's FILE statement to output a text file having fixed format (RECFM=F).
I would like each row to end in an end-of-line control character (or characters), such as a linefeed/carriage return. I tried the FILE statement's TERMSTR=CRLF option, but I still see no end-of-line control characters in the output file.
I think I could use the PUT statement to insert the desired linefeed and carriage return control characters, but would prefer a cleaner method. Is it a reasonable thing to expect of the FILE statement? (Is it a reasonable expectation for outputting fixed format data?)
(Platform: Windows v6.1.7600, SAS for Windows v9.2 TS Level 2M3 W32_VSPRO platform)
Do you really need to use RECFM=F? You can still get fixed length output with V:
data _null_;
  file 'c:\temp\test.txt' lrecl=12 recfm=V;
  do i=1 to 5;
    x=rannor(123);
    put @1 i @4 x 6.4;
  end;
run;
By specifying where you want the data to go (@1 and @4) and the format (6.4), along with the lrecl, you will get fixed-length output.
There may be a work-around, but I believe SAS won't output a line-ending with the Fixed format.