Access XMLNSC tag value in ESQL - ibm-integration-bus

I need to access some XML values and concatenate them into a file name for the output document. The problem is that the solution requires the message to be read in BLOB format, so the ESQL script must first parse the BLOB into CHARACTER/XMLNSC. See the code below. The ESQL code fails with an error when I run it in the message flow, and the resulting file is named just ".xml". I'm using IBM Integration Toolkit 12.
Code
DECLARE CCSID INT InputRoot.Properties.CodedCharSetId;
DECLARE encoding INT InputRoot.Properties.Encoding;
DECLARE bitStream BLOB ASBITSTREAM(InputRoot.BLOB.BLOB, encoding, CCSID);
CREATE LASTCHILD OF Environment.tempXML DOMAIN('XMLNSC') PARSE(bitStream, encoding, CCSID,'BLOB', 'XMLNSC');
DECLARE seorno CHARACTER;
DECLARE sejobn CHARACTER;
SET seorno = FIELDVALUE(Environment.tempXML.ROOT.(XML.Element)SEORNO);
SET sejobn = FIELDVALUE(Environment.tempXML.ROOT.(XML.Element)SEJOBN);
SET OutputLocalEnvironment.Destination.File.Name = seorno || '-' || sejobn || '.xml';

I have just found an answer:
CREATE LASTCHILD OF InputRoot DOMAIN('XMLNSC') PARSE(InputRoot.BLOB.BLOB, InputRoot.Properties.Encoding, InputRoot.Properties.CodedCharSetId);
DECLARE seorno CHARACTER;
DECLARE sejobn CHARACTER;
SET seorno = FIELDVALUE(InputRoot.XMLNSC.ROOT.(XMLNSC.Field)SEORNO);
SET sejobn = FIELDVALUE(InputRoot.XMLNSC.ROOT.(XMLNSC.Field)SEJOBN);
SET OutputLocalEnvironment.Destination.File.Name = seorno || '-' || sejobn || '.xml';

You have probably worked this out already, but you cannot use field type constants like XML.Element with the XMLNSC parser. You must always use constants prefixed with 'XMLNSC'.
In case it helps, you can make your code more compact by initialising the variables as part of the DECLARE statement:
DECLARE seorno CHARACTER FIELDVALUE(InputRoot.XMLNSC.ROOT.(XMLNSC.Field)SEORNO);

Related

Oracle PL/SQL Query With Dynamic Parameters in Where Clause

I'm trying to write a dynamic query that could have a varying number of parameters of different types. The only issue I'm having is handling the case where the value is a string, which therefore needs single quotes around it. I am using the value of a field called key_ref_ to determine what my WHERE clause will look like. Some examples are:
LINE_NO=1^ORDER_NO=P6002277^RECEIPT_NO=1^RELEASE_NO=1^
PART_NO=221091^PART_REV=R02^
At the moment I am replacing the '^' with ' and ' like this:
REPLACE( key_ref_, '^' ,' and ' );
Then I'm trying to create the dynamic query like this:
EXECUTE IMMEDIATE
'select '||column_name_||' into column_ from '||base_table_||' where '||
key_ref_ || 'rownum = 1';
This won't work in cases where the value is not a number.
Also, I only added "rownum = 1" to handle the extra 'and' at the end, instead of removing the last occurrence.
If the input will never contain the tilde symbol (~), you can try the code below.
If the input can contain a tilde, replace it with some other character that cannot occur in the input.
Considering the input provided in the example:
LINE_NO=1^ORDER_NO=P6002277^RECEIPT_NO=1^RELEASE_NO=1^PART_NO=221091^PART_REV=R02^
use the code below:
replace(replace(replace('LINE_NO=1^ORDER_NO=P6002277^RECEIPT_NO=1^RELEASE_NO=1^PART_NO=221091^PART_REV=R02^','^','~ and '),'=','=~'),'~',q'[']')
and the result would be
LINE_NO='1' and ORDER_NO='P6002277' and RECEIPT_NO='1' and RELEASE_NO='1' and PART_NO='221091' and PART_REV='R02' and
Oracle will implicitly cast the quoted number fields, so there should not be any issue.
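The triple-REPLACE trick above can be hard to follow at first glance. Here is a minimal Python sketch of the same logic, outside Oracle, purely for illustration (the function name is made up; it assumes '~' never appears in the input, exactly as the answer states):

```python
def quote_key_ref(key_ref: str) -> str:
    """Turn 'A=1^B=2^' into \"A='1' and B='2' and \" via the tilde trick."""
    s = key_ref.replace('^', "~ and ")   # end each pair; '~' marks the closing quote
    s = s.replace('=', "=~")             # '~' after '=' marks the opening quote
    return s.replace('~', "'")           # turn every marker into a single quote

result = quote_key_ref("LINE_NO=1^ORDER_NO=P6002277^PART_REV=R02^")
print(result)
# → LINE_NO='1' and ORDER_NO='P6002277' and PART_REV='R02' and 
```

Note the result still ends with a dangling "and ", which is why the original query appends "rownum = 1" afterwards.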

How to discard Oracle UTF-8 characters when writing to a file with utl_file.put_line

I am using the following code to write to a file in Oracle PL/SQL.
l_file := utl_file.fopen('HR_OUT', 'TRUMEDAID.txt', 'w');
utl_file.put_line
(
l_file,
utl_raw.cast_to_varchar2
(
utl_raw.convert
(
utl_raw.cast_to_raw(rec_text),
'AMERICAN_AMERICA.WE8ISO8859P1', -- To character set.
'AMERICAN_AMERICA.AL32UTF8' -- From Character set.
)
)
);
However, this does not discard the UTF-8 characters but instead translates Pamêla&~ into Pam�&~. Is there another way that would at least give Pam�la&~? Why isn't the character ê used?
Not sure why you are casting to RAW and back again. You can use the CONVERT function directly; the input string can be CHAR, VARCHAR2, CLOB, etc. See: http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions027.htm#SQLRF00620
You only need to specify the destination character set, since the database character set is the default source.
Also, in these cases, never trust the resulting string of letters. Use the dump() function to investigate the byte values that make up the string. This way you can determine if the string is made up of the correct values.
select utl_raw.cast_to_varchar2
(
utl_raw.convert
(
utl_raw.cast_to_raw(first_name),
'AMERICAN_AMERICA.US7ASCII', -- To character set.
'AMERICAN_AMERICA.AL32UTF8' -- From Character set.
)
)
from per_all_people_f
where employee_number = '212164'
This worked for me.
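To see why ê survives one conversion but not the other, here is an illustrative Python sketch (not Oracle, just the same character-set idea): ê exists in Latin-1 (WE8ISO8859P1), so that conversion is lossless, while 7-bit ASCII (US7ASCII) has no ê, so a lossy conversion substitutes a replacement character:

```python
name = "Pamêla&~"

# ê (U+00EA) exists in Latin-1, so this conversion succeeds losslessly:
latin1 = name.encode("latin-1")          # b'Pam\xeala&~'

# 7-bit ASCII cannot represent ê; a lossy conversion substitutes
# a replacement character instead of dropping the letter silently:
ascii_lossy = name.encode("ascii", errors="replace").decode("ascii")
print(ascii_lossy)
# → Pam?la&~
```

This mirrors the answer's approach: converting to US7ASCII forces every non-ASCII character to a substitution character, which is what the question's output shows.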

Read using CSVREAD with non-printing characters as field and record separators

I have a file that I would like to read in H2 that uses FIELD(ASCII code 31) & RECORD(ASCII code 30) as the field and record separators in my file. I've tried this but it's not working...
SELECT * FROM CSVREAD('test.csv', null, 'rowSeparator=' || CHAR(30) || 'fieldSeparator=' || CHAR(31));
How do I need to format this to read from my file?
EDIT I
This parses the fields out correctly but the rows aren't being parsed out...not sure why:
SELECT * FROM CSVREAD('C:\Users\zmacomber\ReceiptPrinter\data\bak\address.dat', null, STRINGDECODE('charset=UTF-8 rowSeparator=' || CHAR(30) || ' fieldSeparator=' || CHAR(31)));
Looking at the source code of the CSV tool, unfortunately you cannot currently change the row separator used for reading (parsing). The row separator is only used for writing, not for reading. For reading, you would need to use \n, \r, or a combination of both.
I understand this is unexpected, but that's the way it currently is.

How to download data with conversion exit?

I am trying to download an internal table to my PC, and the download should apply the conversion exits.
An example:
Table T002 contains a language key with one character (T002-SPRAS).
When I WRITE T002-SPRAS, the conversion routine ISOLA is used and I get a two-character language key (E becomes EN, ...).
This conversion routine should be used for my export.
My test report:
REPORT Y_MY_DOWNLOAD_TEST.
CONSTANTS: c_filename type string VALUE 'C:\temp\test.txt'.
data: it_table type table of t002.
start-of-selection.
SELECT * from t002 into table it_table.
* Start file download
CALL METHOD cl_gui_frontend_services=>gui_download
EXPORTING
filename = c_filename
filetype = 'ASC' "or DAT
WRITE_FIELD_SEPARATOR = 'X'
WRITE_LF = 'X'
* DAT_MODE = SPACE
codepage = '4110' "UNICODE
SHOW_TRANSFER_STATUS = 'X'
WRITE_LF_AFTER_LAST_LINE = 'X'
CHANGING
data_tab = it_table
EXCEPTIONS
OTHERS = 99.
write: 'end'.
The result is a file written without the conversion exit being applied (English stays E).
The SAP documentation mentions the parameter dat_mode:
If this flag is set, .... Conversion exits are not executed.
I don't set the flag, so I would expect the conversions to be done.
I tried different combinations (dat_mode on/off, filetype ASC and DAT), but never saw a conversion.
Remarks:
I use SAP Release 7.01, Support Package SAPKB70107. It is a unicode system.
T002 is only an example; my real data are other tables containing a language key.
I'm looking for a solution with gui_download (or another standard method/function module).
I don't want to build my own export file like this:
data:
tmp type string,
targetline type string,
targettable type table of string.
loop at it_table into sourceline.
"This could be done dynamic with field symbols and ASSIGN COMPONENT
write sourceline-field1 to tmp.
CONCATENATE targetline ';' tmp into targetline.
"...
APPEND targetline to targettable.
endloop.
This will be a possible solution, but in this case it would be easier for me to adapt the consumer of the export file.
I don't think it's possible. You could however join the LAISO value (which is the value the SPRAS output conversion function returns) in your queries which include an SPRAS type of field, and use a custom type for the query in which you replace the SPRAS type field with the LAISO type.
Here's an example using the T003P table:
types: begin of ty_t003p,
client type mandt,
spras type laiso,
auart type aufart,
txt type auarttext,
end of ty_t003p.
data ta_t003p type standard table of ty_t003p.
select t003p~client t002~laiso t003p~auart t003p~txt into table ta_t003p from t003p inner join t002 on t002~spras = t003p~spras.
cl_gui_frontend_services=>gui_download(
exporting
filename = 'C:\temp\test.txt'
filetype = 'DAT'
changing
data_tab = ta_t003p ).
Okay, here goes: use SE11, go to the table, and double-click the data element to display it. Then double-click the domain to display it, and double-click the conversion routine name (ISOLA in this case). Since you want the output value (the input value is in the database), you want to execute CONVERSION_EXIT_ISOLA_OUTPUT on the spras field for each table entry.
Something like
data: wa_table type t002.
loop at it_table into wa_table.
CALL FUNCTION 'CONVERSION_EXIT_ISOLA_OUTPUT'
EXPORTING
input = wa_table-spras
IMPORTING
OUTPUT = wa_table-spras.
modify it_table from wa_table index sy-tabix.
endloop.
At this point you can just continue using cl_gui_frontend_services=>gui_download on it_table.
I realize this is close to using your WRITE statement, except that the WRITE statement would get you in trouble.
What we have done at my work is write a program that uses the data dictionary to generate an upload download program.
Table DD04L contains the conversion exit for each data element, and then we do something like this:
CONCATENATE 'wa_db-' wa_field-fieldname INTO g_string.
SELECT SINGLE * FROM dd03l INTO wa_dd03l WHERE tabname EQ p_tab AND fieldname EQ wa_field-fieldname.
SELECT SINGLE * FROM dd04l INTO wa_dd04l WHERE rollname EQ wa_dd03l-rollname.
IF wa_dd04l-lowercase IS INITIAL.
_repl 'translate wa_field to upper case.' g_string.
ENDIF.
_add 'if g_oops is initial.'.
IF wa_dd04l-convexit IS NOT INITIAL.
_add 'try.'.
_repl 'move wa_field to &.' g_string.
_add 'CATCH CX_DYNAMIC_CHECK into gcl_dynamic_check.'.
_add 'l_error = gcl_dynamic_check->get_text( ).'.
_add 'l_long_error = gcl_dynamic_check->GET_LONGTEXT( ).'.
_repl 'concatenate ''Conversion error'' wa_field ''into & ->'' l_error into l_error separated by space.' g_string.
_add 'condense l_error.' .
_add 'write l_error. new-line.' .
_add 'write l_long_error. new-line.' .
_add 'ENDTRY.'.
CONCATENATE 'CONVERSION_EXIT_' wa_dd04l-convexit '_INPUT' INTO g_fm.
_repl ' CALL FUNCTION ''&''' g_fm.
_add ' EXPORTING'.
_repl ' input = &' g_string.
_add ' IMPORTING'.
_repl ' output = &' g_string.
_add ' EXCEPTIONS'.
_add ' length_error = 1'.
_add ' OTHERS = 2.'.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
with some defines which add code to the generated ABAP
DEFINE _repl.
wa_prog = &1.
replace all occurrences of '&' in wa_prog with &2.
append wa_prog to it_prog.
END-OF-DEFINITION.
DEFINE _add.
append &1 to it_prog.
END-OF-DEFINITION.
It's a ton of fun to write...
As of ABAP 7.52, I have verified that the conversion exits are executed only when:
The parameters are either filetype = 'DAT'
or filetype = 'ASC' and dat_mode = 'X'
(not really what the documentation says)
And only for fields whose data types are 'N', 'D', 'F' (except value 0) or 'T' (but not 'C' which is the most frequent case, especially concerning the conversion exits ALPHA, ISOLA and CUNIT)
(you may verify these rules on data types in the subroutine Put_Char_LineBuffer of the function group SFES, and more specifically in the subroutine ConvertAsc)
Note that the documentation embedded in the method gui_download of cl_gui_frontend_services says:
DAT mode: No longer supported. Use ASC instead.
So you cannot rely on the conversions done by the method.

Character to number conversion error

declare
l_tot number := 0;
begin
for i in 1..apex_application.g_f08.count loop
l_tot := l_tot + nvl(to_number(apex_application.g_f08(i)),0);
end loop;
if l_tot = nvl(to_number(:P21_TOTAL_PRICE),0) then
return true;
else
return false;
end if;
end;
I get the error below with the code above:
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
The error occurs with :P21_TOTAL_PRICE. What is wrong? How can I correct this?
Rather than using REPLACE, you should use the more powerful REGEXP_REPLACE function. http://www.orafaq.com/wiki/REGEXP_REPLACE
You can then remove any non-numeric character from the string before using the TO_NUMBER function.
In your case it would be something like:
REGEXP_REPLACE(:P21_TOTAL_PRICE, '[^0-9]+', '');
See my answer to almost the exact same question here: Oracle To_Char function How to handle if it's already a string
The error arises because the value you're converting is actually a character string containing commas etc. When you apply TO_NUMBER to it, Oracle cannot handle the commas.
You might want to use the REPLACE function to strip off the commas.
Change
if l_tot = nvl(to_number(:P21_TOTAL_PRICE),0) then
to
if l_tot = nvl(to_number(replace(:P21_TOTAL_PRICE,',','')),0) then
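For illustration, here is a small Python sketch of the same "strip non-digits, then convert" idea from the REGEXP_REPLACE answer (the helper name is made up; note that a pattern like [^0-9]+ also removes a decimal point, so a real total with decimals would need a pattern that keeps '.'):

```python
import re

def to_number(value: str) -> int:
    """Strip every non-digit character, then convert the remainder."""
    return int(re.sub(r"[^0-9]+", "", value))

print(to_number("1,234"))   # commas removed before conversion
# → 1234
print(to_number("$99"))     # currency symbol removed too
# → 99
```

This mirrors why the simple REPLACE of ',' with '' fixes the ORA-06502: the string becomes purely numeric before the conversion is attempted.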
