How can I fetch the XML of a record in NetSuite using SuiteScript?

I am trying to fetch the XML data of a Sales Order record, but I can't find a way to do it.
The data appears when '&xml=t' is appended to the URL of a record.
I want to fetch all of the data, in XML format, into a variable.

Found the answer.
Just use record.load and store the result in a variable:
var objRecord = record.load({
    type: record.Type.SALES_ORDER,
    id: 157,
    isDynamic: true
});
Logging this variable won't help, as there is a character limit of 3,999 characters. You can store it in the File Cabinet instead.
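For example, here is a minimal sketch of writing the loaded record to the File Cabinet as JSON with the N/file module (the file name and folder id are hypothetical; adjust them to your account):

/**
 * @NApiVersion 2.x
 * @NScriptType ScheduledScript
 */
define(['N/record', 'N/file'], function (record, file) {
    function execute(context) {
        var objRecord = record.load({
            type: record.Type.SALES_ORDER,
            id: 157,
            isDynamic: true
        });
        // JSON.stringify gives the record's full field data; logging it
        // directly would be truncated at the log's character limit.
        var snapshot = file.create({
            name: 'salesorder_157.json',   // hypothetical file name
            fileType: file.Type.JSON,
            contents: JSON.stringify(objRecord),
            folder: 123                    // hypothetical File Cabinet folder id
        });
        snapshot.save();                   // returns the new file's internal id
    }
    return { execute: execute };
});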

Related

Web.Contents calling an API service and merging pages with List.Transform started to fail

I created a Power BI report which connects to a data source via an API service. The returned JSON contains thousands of entities. The API service is called via the Web.Contents function. The API service always returns the total record count, so we are able to calculate the number of pages that have to be called to obtain the whole dataset. This report displays data from our servicedesk app, which is deployed on many servers for many customers, and it uses query parameters to connect to any of these servers.
The detail of the Power Query is below.
Why am I writing here: this report had been working without any issue for more than 1.5 years, but on August 17th one of the servers started causing errors in the step Pages, where some random lines (pages) contain errors (see the attached picture labeled "Errors in step Pages"). As a result, the next step, Entities (List.Union), stops the refresh and generates errors with the message:
Expression.Error: We cannot apply field access to the type List. Details: Value=[List] Key=requests
What is notable:
The API service returns records in the same order, but the faulty lists are random when calling with the same parameters.
Sometimes the refresh completes without any error.
The same Power Query run against another server works correctly; the problem is only with one specific server.
The problem started without notice on the most important server after 1.5 years without any issue.
Here is the full text of the query for this main source, which is used later in other queries to extract all the necessary data. The JSON is really complicated and I extract from it a list of requests, a list of solvers, a list of solver groups, etc. This base query and its output are the input for many referenced queries.
[Screenshot: Errors in step Pages]
let
    BaseAPIUrl = apiurl & "apiservice?",    /* apiurl is a parameter - the name of the server, e.g. https://xxxx.xxxxxx.sk/ */
    EntitiesPerPage = RecordsPerPage,       /* RecordsPerPage is a parameter defining the number of records per page - 200-400 records per page was optimal for us, but it also works with 4000 records per page */
    ApiToken = FnApiToken(),                /* this function returns the apitoken value, i.e. the return value of another API service, apiurl&"api/auth/login", which uses a username and password in the body of the call to get the apitoken */
    GetJson = (QParm) =>                    /* definition of the general function that gets data from the data source */
        let
            Options =
                [
                    Query = QParm,
                    Headers =
                        [
                            Accept = "application/json",
                            ApiKeyName = "apitoken",
                            Authorization = ApiToken
                        ]
                ],
            RawData = Web.Contents(BaseAPIUrl, Options),
            Json = Json.Document(RawData)
        in
            Json,
    GetEntityCount = () =>                  /* function called once to get the number of records via GetJson; the count is returned as part of each call */
        let
            QParm = [pp = "1", pg = "1"],
            Json = GetJson(QParm),
            Count = Json[totalRecord]
        in
            Count,
    GetPage = (Index) =>                    /* function called repeatedly to get each page of JSON via GetJson */
        let
            PageNr = Text.From(Index + 1),
            PerPage = Text.From(EntitiesPerPage),
            QParm = [pg = PageNr, pp = PerPage],
            Json = GetJson(QParm),
            Value = Json[data][requests]
        in
            Value,
    EntityCount = List.Max({EntitiesPerPage, GetEntityCount()}),    /* number of records */
    PageCount = Number.RoundUp(EntityCount / EntitiesPerPage),      /* number of pages */
    PageIndices = {0 .. PageCount - 1},
    Pages = List.Transform(PageIndices, each GetPage(_) /* Function.InvokeAfter(() => GetPage(_), #duration(0,0,0,1)) */),    /* GetPage is called here for each page to get the whole dataset - the commented-out variant tested a delay between pages, but it was not necessary */
    Entities = List.Union(Pages),
    Table = Table.FromList(Entities, Splitter.SplitByNothing(), null, null, ExtraValues.Error)
in
    Table
I also tried another way of appending the pages to a list using List.Generate. This also brings random errors into the list, but it makes it possible to transform the list to a table, in contrast to the original way using List.Transform; however, the other referenced queries still fail and contain errors on the last row.
When I explore the content of a faulty page/list by extracting it via Add as New Query, all the records are always there without any failure...
    Source = List.Generate(    /* another way to generate the list of all pages */
        () => [Page = 0, ReqPageData = GetPage(0)],
        each [Page] < PageCount,
        each [
            ReqPageData = GetPage([Page]),
            Page = [Page] + 1
        ],
        each [ReqPageData]
    ),
    #"Converted to Table" = Table.FromList(Source, Splitter.SplitByNothing(), null, null, ExtraValues.Error),    /* here I am able to generate a table from the list, in contrast to the List.Transform variant */
    #"Expanded Column1" = Table.ExpandListColumn(#"Converted to Table", "Column1"),    /* here I can expand the list to a column */
    #"Removed Errors" = Table.RemoveRowsWithErrors(#"Expanded Column1", {"Column1"})   /* here I try to exclude the errors, but I don't know what happened and which records (if any) were excluded */
[Screenshot: Extracting the errored page]
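One way to see which pages fail, instead of letting the whole list error out, is to wrap each page call in try (a diagnostic sketch under the same definitions as above; PageResults and FailedPages are names I introduce here):

    PageResults = List.Transform(PageIndices, each try GetPage(_)),           /* one record per page: [HasError, Value] or [HasError, Error] */
    FailedPages = List.Select(PageIndices, each PageResults{_}[HasError]),    /* indices of the pages that errored */
    Pages = List.Transform(PageResults, each if _[HasError] then null else _[Value])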
And finally, I am totally clueless, unable to find the cause of this behavior on this specific server. I tested calling the errored pages via Postman, and I discussed this issue with the author of the API service. He also tried to call this API service with all the parameters, but the server returns every page OK; only Power Query is not able to List.Transform them...
I will be grateful for and appreciate any tips or advice, or hearing from anybody who has solved the same issue in the past.
Kuby
No; each errored line of the list in the List.Transform step could be extracted as a new query, and all the records from that one page are OK. Hmmm.
Finally: the problem described in this issue was caused by "corrupted" content in the returned JSON. The provider of the core system informed me that they found a bug, and after the fix on the servicedesk side everything is OK again. I tried to find the problem in Power Query, but the problem was in the servicedesk. :(

How to load a JSON record into a json column in Postgres with Apache NiFi?

This is my flow file content:
{
  "a": "b",
  "c": "y",
  "d": "z",
  "e": "w",
  "f": "u",
  "g": "v",
  "h": "o",
  "x": "t"
}
The final result should look like this in Postgres:
| test |
|-------------------------------------------------------------------|
| {"a":"b","c":"y","d":"z","e":"w","f":"u","g":"v","h":"o","x":"t"} |
The table is json_test and the column name is test.
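For reference, a table matching that description could be created like this (my assumption of the DDL; only the table and column names come from the question):

-- Hypothetical DDL matching the description above
create table json_test (
    test json
);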
These steps show how I tried to solve the problem.
My method was to store the JSON record in a variable as a string with ExtractText.
The attribute data takes only some of the key-values from the JSON, not the entire record:
data = {"a":"b",
"c":"y",
"d":"z",
"e":"w",
"f":
So I have a problem in the regex expression.
Next I used PutSQL with the following SQL statement:
Unfortunately, the result isn't the wanted one.
I need to know the exact expression that I should set in ExtractText to get the entire JSON record into a variable as a string.
The SQL statement should be:
insert into schema.table_name (column_name) values (the_variable_where_the_flowfile_data_was_stored)
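For what it's worth, a common configuration for this (an assumption on my part, not a confirmed answer from this thread) is to give ExtractText a single capture group spanning the whole content, raise its size limits, and reference the resulting attribute in PutSQL:

ExtractText:  add property   data = (?s)(^.*$)
              and raise "Maximum Buffer Size" and "Maximum Capture Group Length"
              above the flowfile size (the 1024-character capture default is what truncates).
PutSQL:       insert into json_test (test) values ('${data}')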

How to compare a variable with CSV file content in JMeter

I have an HTTP request whose response is below:
{
  "DATA": {
    "G_1": {
      "OR_ID": "100400",
      "LEGAL_ENTITY": "TEST",
      "BUSINESS_UNIT": "TEST BU"
    },
    "G_2": {
      "OR_ID": "100500",
      "LEGAL_ENTITY": "Test1 ",
      "BUSINESS_UNIT": "Test1 "
    },
    "G_3": {
      "OR_ID": "100100",
      "LEGAL_ENTITY": "TEST3 ",
      "BUSINESS_UNIT": "Test3"
    }
  }
}
I need to get OR_ID from the above response, which I am able to do using a Regular Expression Extractor.
There is an input CSV file with multiple rows. For each row I need to check whether the OR_ID exists in column 2; if it exists, I have to take columns 5 and 7 and pass them in the body of my next POST request. The same OR_ID is repeated in the CSV, so I need to repeat the POST request for all the repeated values of OR_ID in the CSV. The CSV file has no header.
441919244,100010,QUTRN,TEST Inc.,100100,TEST,VCG and A, INC,USD,3409.0900,O,ICO-VCG-0140,2019-10-31,52 945,USD,USD,359409.0900,359409.0900,359409.0900,Processed,93901372,File,2019111NG52.csv,
441919028,100400,QUQED,TEST MEDICAL EDUCATION INC.,100020,QUINC,TEST INC.,USD,12.340,O,ICO-INC-8718,2019-10-31,52 729,USD,USD,12.3400,12.3400,12.3400,Processed,93901372,,File,20191113NG52.csv,
Can you please help?
Assuming that you can extract the OR_ID from the JSON response, the following solution could be useful.
In a CSV Data Set Config element (or the Random CSV Data Set Config plugin), read the CSV file and assign variable names to the respective columns:
variable names = C1,OR_ID,C3,C4,C5,C6,C7,C8,C9
Add a While Controller as a parent of the CSV Data Set Config element and the HTTP Request where you want to send data from the CSV file:
${__jexl3("${OR_ID}"!="EOF")}
This checks for EOF in column 2 of the CSV file. Hence, add
,EOF,,
as the last line of the CSV file.
Add an If Controller to the HTTP Request with the following condition:
${__jexl3("${OR_ID}"=="${OR_ID_J}")}
OR_ID_J is the OR_ID picked from the JSON response.
Use ${C5} and ${C7} in the places where you want to insert the data from the CSV file.
Reset OR_ID to "" using a JSR223 Sampler with the following:
vars.put("OR_ID", "");
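Putting those steps together, the test plan structure would look roughly like this (an outline of the steps above, not the linked sample plan):

Thread Group
  While Controller           condition: ${__jexl3("${OR_ID}" != "EOF")}
    CSV Data Set Config      variable names: C1,OR_ID,C3,C4,C5,C6,C7,C8,C9
    If Controller            condition: ${__jexl3("${OR_ID}" == "${OR_ID_J}")}
      HTTP Request           body uses ${C5} and ${C7}
    JSR223 Sampler           vars.put("OR_ID", "");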
A sample test plan is available on GitHub.

Maximo MAXINTMSGTRK table: How to extract text from MSGDATA column? (HUGEBLOB)

I'm attempting to extract the text from the MSGDATA column (HUGEBLOB) in the MAXINTMSGTRK table:
I've tried the options outlined here: How to query hugeblob data:
select
    msg.*,
    utl_raw.cast_to_varchar2(dbms_lob.substr(msgdata, 1000, 1)) msgdata_expanded,
    dbms_lob.substr(msgdata, 1000, 1) msgdata_expanded_2
from
    maxintmsgtrk msg
where
    rownum = 1
However, the output is not text:
How can I extract the text from the MSGDATA column?
It is possible to do it using an automation script, uncompressing the data with the psdi.iface.jms.MessageUtil class.
from psdi.iface.jms import MessageUtil
...
# Read the compressed BLOB and uncompress it using the stored message length
msgdata_blob = maxintmsgtrkMbo.getBytes("msgdata")
byteArray = MessageUtil.uncompressMessage(msgdata_blob, maxintmsgtrkMbo.getLong("msglength"))
# Convert the resulting byte array to a string
msgdata_clob = ""
for symb1 in byteArray:
    msgdata_clob = msgdata_clob + chr(symb1)
It sounds like it's not possible because the value is compressed:
Starting in Maximo 7.6, the messages written by the Message Tracking application are stored in the database. They are no longer written as XML files as in previous versions.
Customers have asked how to search and view MSGDATA data from the MAXINTMSGTRK table.
It is not possible to search or retrieve the data in the MAXINTMSGTRK table in 7.6 using SQL. The BLOB field is stored compressed.
MIF 7.6 Message tracking changes

NiFi ExtractText from a JSON attribute that is comma-delimited

Hi, I am fairly new to Apache NiFi and I need to extract a suburb from a JSON string.
The flowfile looks like this:
{"UserName":"John Doe", "Address":"22 smith st, Smithville, NSW", "IP":"10.10.10.1"}
The suburb is always the second-last value in the comma-delimited "Address" attribute, but its position from the left varies, as there might be something like:
{"UserName":"John Doe", "Address":"Level 10, 22 smith st, Smithville, NSW", "IP":"10.10.10.1"}
I have tried to use ExtractText with the regex [^,]+(?=,[^,]*$) but was not able to make it extract the attribute correctly.
You shouldn't use ExtractText to extract JSON values; it is not the proper way to do it.
You can use the EvaluateJsonPath processor to extract "Address" from the JSON with the help of the following configuration.
Configure the processor's Destination to be "flowfile-attribute" and the Return Type to be "auto-detect".
Now add a new property named "Address" with the value $.Address.
The address from the JSON will then be stored in an attribute named "Address", and you can extract the suburb (the second-last value) from it like this: ${Address:substringBeforeLast(','):substringAfterLast(',')}.
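To illustrate with the second sample above (the :trim() call at the end is my addition, to drop the leading space):

Address = "Level 10, 22 smith st, Smithville, NSW"
${Address:substringBeforeLast(',')}                                   -> "Level 10, 22 smith st, Smithville"
${Address:substringBeforeLast(','):substringAfterLast(',')}           -> " Smithville"
${Address:substringBeforeLast(','):substringAfterLast(','):trim()}    -> "Smithville"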
Look at this Expression Language guide, which may be useful for you:
https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#substringafterlast
https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#substringbeforelast
