InputRoot and InputBody are NULL for MQ ESQL ComputeNode

I've been tasked with creating a new flow, and for some reason I can't access the data coming in from the 'IN' queue. I'm using Message Broker Toolkit 7.0.0.1 on Windows. The test messages are the same ones that work in production.
CREATE COMPUTE MODULE FLOW_Compute
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        DECLARE TMP ROW PASSTHRU('SELECT RAWTOHEX(UTL_RAW.CAST_TO_RAW(DBMS_OBFUSCATION_TOOLKIT.md5(INPUT_STRING => CURRENT_TIMESTAMP))) UNIQUE_ID FROM DUAL');
        DECLARE blobMSG BLOB InputRoot.BLOB.BLOB;
        DECLARE MSG CHARACTER CAST(blobMSG AS CHARACTER CCSID InputRoot.MQMD.CodedCharSetId ENCODING InputRoot.MQMD.Encoding);
        DECLARE TITLE CHAR InputRoot.XML.Request."MessageID";
        PASSTHRU(
            'INSERT INTO PA0101.DEBUG_TABLE VALUES(?,?,?)',
            TMP.UNIQUE_ID,
            TITLE,
            MSG
        );
        RETURN TRUE;
    END;
END MODULE;
The DEBUG_TABLE rows come out like this (pipe delimited):
F69A159|||
11C7EBF|||
1077ADD|||
Here is a sample message:
<Request>
    <MessageID>a1f5298a-e339-423b-ac9a-4654cb46e965</MessageID>
    <SendResponse>false</SendResponse>
    <BasicElements>
        <FeedType>Realtime</FeedType>
        <MsgDt>08/09/2015</MsgDt>
        <Category>Action</Category>
        <PriorityCd>1</PriorityCd>
        <SubjectTx>This is important</SubjectTx>
        <DetailTx>[lots of html]</DetailTx>
    </BasicElements>
</Request>
When I try to run command line utils on the server I usually get:
<command>
ld.so.1: <command>: fatal: libjvm.so: open failed: No such file or directory
Killed
The code doesn't produce any warnings, and the .bar file builds and deploys, so I am at a loss as to what could be going wrong.

Most likely your ESQL is not using the message domain that is set on the input node.
You cannot use both the XML and the BLOB domain at the same time; the input message will be parsed in one or the other, as configured on the input node (or on ResetContentDescriptor nodes in your flow).
So either InputRoot.BLOB or InputRoot.XML will be NULL. I suspect that you are actually using the XMLNSC domain on your input node, in which case the message body is parsed in that domain and can be accessed under InputRoot.XMLNSC.
If you need the message body both as a BLOB and parsed as XML in the same flow, set the message domain to BLOB on the input node and parse the message later in the flow, either with a ResetContentDescriptor node or with the PARSE clause of the CREATE statement in ESQL:
http://www-01.ibm.com/support/knowledgecenter/api/content/nl/en-us/SSKM8N_8.0.0/com.ibm.etools.mft.doc/ak04950_.htm
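For illustration, here is a minimal ESQL sketch of the CREATE ... PARSE approach. It assumes the input node is set to the BLOB domain, so the raw bytes are available under InputRoot.BLOB.BLOB; the Environment path is just an example target.

-- Parse the BLOB message body into the XMLNSC domain on demand
DECLARE blobMsg BLOB InputRoot.BLOB.BLOB;
CREATE LASTCHILD OF Environment.Variables DOMAIN('XMLNSC')
    PARSE(blobMsg, InputRoot.Properties.Encoding, InputRoot.Properties.CodedCharSetId);
-- The parsed tree can then be read, e.g.:
-- SET TITLE = Environment.Variables.XMLNSC.Request.MessageID;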

Related

Rsyslog omprog pass message to scripts

Specifically, I want to filter logs and send warning emails.
First I tried ommail, but unfortunately that module only supports mail servers that do not require authentication, and my mail server requires it.
So I tried omprog: I wrote a Python script that logs on to my mail server; it receives one parameter, the log message, and sends it as the mail body.
Then I hit the problem: I cannot pass the log to my script. If I try it like this, $msg is treated as a literal string:
if $fromhost-ip == "x.x.x.x" then {
action(type="omprog"
binary="/usr/bin/python3 /home/elancao/Python/sendmail.py $msg")
}
I searched the official documentation and found this sample:
module(load="omprog")
action(type="omprog"
binary="/path/to/log.sh p1 p2 --param3=\"value 3\""
template="RSYSLOG_TraditionalFileFormat")
But in the sample they are passing a literal string "p1", not a dynamic parameter.
Can you please help? Thanks a lot!
The expected use of omprog is for your program to read stdin; there it will find the full default RSYSLOG_FileFormat template data (with date, host, tag, msg). This is useful, as it means you can write your program so that it is started only once, and then loops and handles all messages as they arrive.
This cuts down on the overhead of restarting your program for each message and makes it react faster. However, if you prefer, your program can exit after reading one line, and rsyslog will restart it for the next message. (You may want to implement confirmMessages=on.)
If you just want the msg part as data, you can use template=... in the action to specify your own minimal template.
If you really must have the msg as an argument, you can use the legacy filter syntax:
^program;template
This will run the program once for each message, passing it the output of the template as an argument. This is not recommended.
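For what it's worth, here is a minimal Python sketch of the recommended stdin loop; send_mail is a hypothetical placeholder for your own authenticated SMTP logic (e.g. with smtplib):

#!/usr/bin/env python3
# Minimal omprog handler: rsyslog writes one template-rendered message
# per line on stdin, so loop over stdin and the script starts only once.
import sys

def send_mail(body):
    pass  # placeholder: log on to your SMTP server and send the mail here

for line in sys.stdin:
    send_mail(line.rstrip("\n"))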
If your omprog script is not doing anything, or is not saving anything to a file, the problem is that:
rsyslog is sending the full message to that script on stdin, so you need to define or use a template;
your script needs to read from stdin and then return.
Example in Perl with omprog:
# my $input = join( '-', @ARGV );   # not working, I lost 5 hours of my life on this
my $input = <STDIN>;                # this is what you need
Hope this is what the perl/python/rsyslog community needs.

Protocol Buffers between two different languages

We are using Golang and .NET Core for the services in our microservices infrastructure.
All data exchanged between the services is based on protobuf contracts that we have created.
Here is an example of one of our protobuf definitions:
syntax = "proto3";
package Protos;

option csharp_namespace = "Protos";
option go_package = "Protos";

message EventMessage {
  string actionType = 1;
  string payload = 2;
  bool auditIsActive = 3;
}
Golang is working well: the service generates the content as needed and sends it to the SQS queue. Once that happens, the .NET Core service picks up the data and tries to deserialize it.
Here are the contents of the SQS message example:
{"#type":"type.googleapis.com/Protos.EventMessage","actionType":"PushPayload","payload":"<<INTERNAL>>"}
But we are getting an exception saying the wire type is invalid, as shown below:
Google.Protobuf.InvalidProtocolBufferException: Protocol message contained a tag with an invalid wire type.
at Google.Protobuf.UnknownFieldSet.MergeFieldFrom(CodedInputStream input)
at Google.Protobuf.UnknownFieldSet.MergeFieldFrom(CodedInputStream input)
at Google.Protobuf.UnknownFieldSet.MergeFieldFrom(UnknownFieldSet unknownFields, CodedInputStream input)
at Protos.EventMessage.MergeFrom(CodedInputStream input) in /Users/maordavidzon/projects/github_connector/GithubConnector/GithubConnector/obj/Debug/netcoreapp3.0/EventMessage.cs:line 232
at Google.Protobuf.MessageExtensions.MergeFrom(IMessage message, Byte[] data, Boolean discardUnknownFields, ExtensionRegistry registry)
at Google.Protobuf.MessageParser`1.ParseFrom(Byte[] data)
The proto file is exactly the same in both services.
Is there an option or property we are missing and need to add?
It looks like you're using the JSON format rather than the binary format. In that case, you want ParseJson(string json), not ParseFrom(byte[] data).
Note: the binary format is more efficient, if that matters to you. It also has better support across protobuf libraries / tools.
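To make that concrete, here is a minimal C# sketch, under the assumption that the SQS body is the JSON shown above. Since the "@type" field suggests a JSON-serialized google.protobuf.Any, the sketch registers the message type and unpacks it; for plain JSON without "@type", EventMessage.Parser.ParseJson(json) is enough.

using Google.Protobuf;
using Google.Protobuf.Reflection;
using Google.Protobuf.WellKnownTypes;
using Protos;

static class SqsPayloadParser
{
    // Register EventMessage so the parser can resolve the "@type" URL.
    private static readonly JsonParser Parser = new JsonParser(
        JsonParser.Settings.Default.WithTypeRegistry(
            TypeRegistry.FromMessages(EventMessage.Descriptor)));

    public static EventMessage Parse(string json) =>
        Parser.Parse<Any>(json).Unpack<EventMessage>();
}

If the producer switches to the binary wire format instead (recommended), the counterpart on the .NET side is EventMessage.Parser.ParseFrom(bytes).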
Basically there are two possible scenarios: either the proto files generated for .NET and Golang are not the same version, or your data is being corrupted while transferring between the Golang and .NET applications.
Protobuf is a binary protocol; check whether you have any HTTP filter or anything else that could alter the incoming or outgoing stream of bytes.

ExecuteScript output two different flowfiles NiFi

I'm using ExecuteScript with Python, and I have a dataset that may contain some corrupted data. My idea is to process the good data and put it in my flow file content for the success relationship, and to redirect the corrupted records to the failure relationship. I have done something like this:
for msg in messages:
    try:
        id = msg['id']
        timestamp = msg['time']
        value_encoded = msg['data']
        hexFrameType = '0x' + value_encoded[0:2]
        matches = re.match(regex, value_encoded)
        # ....
    except:
        error_catched.append(msg)
        pass
Any idea how I can do that?
For the purposes of this answer I am assuming you have an incoming flow file called "flowFile" which you obtained from session.get(). If you simply want to inspect the contents of flowFile and then route it to success or failure based on an error occurring, then in your success path you can use:
session.transfer(flowFile, REL_SUCCESS)
And in your error path you can do:
session.transfer(flowFile, REL_FAILURE)
If instead you want new files (perhaps one containing a single "msg" in your loop above) you can use:
outputFlowFile = session.create(flowFile)
to create a new flow file using the input flow file as a parent. If you want to write to the new flow file, you can use the PyStreamCallback technique described in my blog post.
If you create a new flow file, be sure to transfer the latest version of it to REL_SUCCESS or REL_FAILURE using the session.transfer() calls described above (but with outputFlowFile rather than flowFile). Also you'll need to remove your incoming flow file (since you have created child flow files from it and transferred those instead). For this you can use:
session.remove(flowFile)
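Putting those pieces together, here is a minimal Jython sketch of the two-output pattern; session, REL_SUCCESS, and REL_FAILURE are provided by ExecuteScript, and the step that writes content into each child (e.g. via a StreamCallback) is elided:

# Route good and corrupted records to separate child flow files
flowFile = session.get()
if flowFile is not None:
    good = session.create(flowFile)  # child flow file for the clean records
    bad = session.create(flowFile)   # child flow file for the corrupted records
    # ... write the appropriate content into each child here ...
    session.transfer(good, REL_SUCCESS)
    session.transfer(bad, REL_FAILURE)
    session.remove(flowFile)         # parent is consumed; children were transferred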

Can't connect LocalDataAdapter to schemas

This is my situation:
I have a service named BaseDataService, in which I create a LocalDataAdapter that I'm going to use to connect to a specific service called DataBaseLayer. The service I'm trying to connect to contains the schemas that hold all the data tables I need.
Then I create another service that descends from BaseDataService, so it contains the LocalDataAdapter mentioned earlier. The problem is that after configuring the LocalDataAdapter, I can't open the data tables that are in the DataBaseLayer service. Posting the code:
procedure TBaseDataService.ConnectDataBaseLayerToAdapter;
begin
  DataBaseLayerAdapter.ServiceInstance := DataBaseLayerService as IDataAbstractLocalServiceAccess;
end;

procedure TBaseDataService.DataAbstractServiceCreate(Sender: TObject);
begin
  DataBaseLayerAdapter.ServiceName := ' ';
end;

function TBaseDataService.GetDataBaseLayerService: IDataBaseLayerService;
begin
  if not Assigned(FDataBaseLayerService) then
    FDataBaseLayerService := (CreateAndConnectService('DataBaseLayerService') as IDataBaseLayerService);
  Result := FDataBaseLayerService;
end;
ConnectDataBaseLayerToAdapter;
tbl_SA_Receipts.Open;
Note: The last part is where I try to connect to DataBaseLayerService.
At first I got this error:
"An exception was raised on the server: Access Violation at Address 014FD9C0 in Module .... Read of address 0000098"
I managed to get that part fixed after a lot of work, but now the problem is that when I assign the service instance, it ends up assigned as nil. I can't figure out why, since the code above does exactly that. I've been stuck on this part for a while.
Managed to fix this:
In order to use a LocalDataAdapter to connect to another service containing data tables, the service you are connecting to must be a descendant of TDataAbstractService; otherwise it will return a "read of address" access error.
The code making the connection is actually correct.
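In declaration form, the fix looks roughly like the sketch below; the class and interface names are taken from the question, and the class body is elided:

type
  // The service a LocalDataAdapter points at must descend from
  // TDataAbstractService, or the access violation above occurs.
  TDataBaseLayerService = class(TDataAbstractService, IDataBaseLayerService)
    // schemas and data tables are defined here
  end;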

ORA-28579: network error during callback from external procedure agent

Has anyone seen this error when trying to call an external C function from an Oracle query? I'm using Oracle 10g and get this error every time I try to call one of the two functions in the library. A call to the other function returns fine every time, though the function that works is all self-contained, no calls to any OCI* functions.
Here's the stored procedure that is used to call the failing C code:
CREATE OR REPLACE PROCEDURE index_procedure(text in clob, tokens in out nocopy clob, location_needed in boolean)
as language c
name "c_index_proc"
library lexer_lib
with context
parameters
(
context,
text,
tokens,
location_needed
);
Any help would be appreciated. Everything I've found on this error message says that the action to take is: Contact Oracle customer support.
Edit: I've narrowed it down: I know there is a segfault deep in libclntsh after I call OCILobTrim (to truncate the tokens clob down to zero length). Here is the code I've been using to call this procedure.
declare text CLOB; tokens CLOB;
begin
dbms_lob.createtemporary(tokens, TRUE);
dbms_lob.append(tokens, 'token');
dbms_lob.createtemporary(text, TRUE);
dbms_lob.append(text, '<BODY>Test Document</BODY>');
index_procedure(text, tokens, FALSE);
dbms_output.put_line(tokens);
end;
/
Is there something wrong with this setup that might be causing OCILobTrim problems?
It looks like this is one of those errors that essentially means any number of things could have gone wrong with the external procedure.
There is a known bug in 10.2.0.3; no idea if it's relevant:
ORA-28579 occurs when trying to select data from a pipelined table function implemented in "C" using the ODCITable/ANYDATASET interface. ODCITableDescribe works fine but ODCITableFetch generates an ORA-28579 error.
I would suggest:
1. Look in the database server trace directories, and in the directory where the external proc is located, for any log or trace files generated when the error occurs.
2. Instrument your external proc in some way so that you can trace its execution yourself.
3. Contact Oracle support.
Well, an upgrade to 10.2.0.4 (I was using 10.2.0.1) at least gave me an understandable error instead of a fairly useless core file and the ORA-28579.
It turns out that the code I was debugging assumed that a single OCILobRead call would return all of the data. That is only the case for clients using a fixed-width character set.
For clients using a variable-width character set, OCILobRead was actually reading part of the data and returning OCI_NEED_DATA, and the subsequent calls to OCILobTrim and OCILobWrite were failing because of the still-pending OCILobRead. The solution was to loop on OCILobRead until it no longer returned OCI_NEED_DATA and we had all of the needed data in our buffer.
A call to OCIBreak would also have allowed the OCILobTrim and OCILobWrite functions to continue, though we wouldn't have had all of the needed input data.
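For reference, that read loop looks roughly like this C sketch; the service, error, and locator handles (svchp, errhp, lob) are assumed to be set up already, and error handling and buffer accumulation are elided:

/* Poll OCILobRead until it stops returning OCI_NEED_DATA, instead of
 * assuming a single call returns the whole LOB. Requires <oci.h>. */
ub4   amt    = 0;       /* 0 on input: read until the end of the LOB */
ub4   offset = 1;       /* LOB offsets are 1-based */
text  buf[4096];
sword rc;

do {
    rc = OCILobRead(svchp, errhp, lob, &amt, offset,
                    buf, sizeof buf, NULL, NULL, 0, SQLCS_IMPLICIT);
    /* append the piece of data now in buf to your own buffer here */
} while (rc == OCI_NEED_DATA);
/* rc should now be OCI_SUCCESS, and all of the data has been read */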
