I have an ExecuteSQL processor that returns a SQL Server varbinary field for a particular row:
select [File]
from dbo.Attachment
where attachmentid=?
The query finds exactly one row, and the result gets stored in Avro. The retrieved File could be a text format (CSV, HTML, etc.) or a binary format (PDF, Office documents, images, etc.).
If the content is text, I can run it through ConvertAvroToJSON and then EvaluateJsonPath to get the content that I want. That doesn't work with the binary content, however. When I download the content of a flowfile that has, say, a PowerPoint file, PowerPoint complains about the content.
I'd like to have the Content of my FlowFile be just the binary content (I'll be sending it on to a PutMarkLogic processor later). How can I do that?
I did not test it, but you could use ExecuteGroovyScript as a workaround to write the binary field directly to the flow file content.
SQL.mydb - add this parameter at the processor level and link it to the required DBCP connection pool.
AttributeWithID - I assume there is a flow file attribute with this name that contains the value to be used in the SQL query for attachmentid.
def ff = session.get()
if (!ff) return

// the GString value is bound as a prepared-statement parameter by Groovy SQL
SQL.mydb.eachRow("""
    select [File]
    from dbo.Attachment
    where attachmentid = ${ff.AttributeWithID}
""") { row ->
    // stream the varbinary column straight into the flow file content
    ff.write { outStream ->
        outStream << row.getBinaryStream(1)
    }
}

REL_SUCCESS << ff
I am building a generic CSV output module with a variable number of columns. The DataFormat in BW (5.14) lets you define a repeating item, and thus offers a list of items that I could map data to in the RenderCSV step.
But when I run this with data for more than one column (and loops), only one column is generated.
Is the feature broken, or am I using it wrongly?
Alternatively, I defined "enough" optional columns in the data format and mapped each field separately - not really a generic solution.
It looks like in BW 5, repeating elements are not supported when using Data Format and Parse Data to parse text.
Please see https://support.tibco.com/s/article/Tibco-KnowledgeArticle-Article-27133
The workaround is to use the Data Format resource, Parse Data, and Mapper activities together. First use Data Format and Parse Data to parse the text into XML where every element represents one line of the text. Then use a Mapper activity and the tib:tokenize-allow-empty XSLT function to tokenize every line and get sub-elements for each field in the lines.
The linked article also has an attached workaround implementation.
How to read an excel sheet and put the cell value within different text fields through UiPath?
I have an Excel sheet as follows:
I have read the Excel contents and, to iterate over them later, I have stored them in an Output Data Table as follows:
Read Range - Output:
  DataTable: CVdatatable
Output Data Table:
  DataTable: CVdatatable
  Text: opCVdatatable
Screenshot:
Finally, I want to read the text opCVdatatable in an iteration and write the values into text fields. So in the desired input fields I entered opCVdatatable or opCVdatatable + "[k(enter)]" as required.
Screenshot:
But UiPath seems to start from the beginning of the Output Data Table whenever I call opCVdatatable.
In short, each desired input field iteratively gets filled with all the data stored in the Output Data Table.
Can someone help me out please?
My first recommendation is to use the Workbook: Read Range activity to read data from Excel, because it is quicker, works in the background, and does not require Excel to be installed on the system.
Start your sequence like this (note the add headers property is not checked):
You do not need to use Output Data Table, because this activity outputs a string containing all row items. What you want to do instead is access the items in the data table and output each one as a string in your Type Into activity, e.g. CVDatatable.Rows(0).Item(0).ToString, like so:
You mention you want to read the text opCVdatatable in an iteration and write the values into text fields. This is a little bit more complex, but I'll give you an example. You can use a For Each Row activity and loop through each row in CVDatatable, setting the Index property if required. See below:
The challenge is to get the selector correct here and make it dynamic, so that it targets a different text field per iteration. The selector for the type into activity will depend on the system you are targeting, but here is an example:
And the selector for this:
Also, here is a working XAML file for you to test.
Hope this helps.
Chris
Here's a different, more general approach. Instead of including the target in the process itself, the Excel would be modified to include parts of a selector:
Note that column B now contains an identifier, and this ID depends on the application you will be working with. For example, here's what my sample app looks like. As you can see, the first text box has an id of 585, the second one is 586, and so on (note that you can work with any kind of identifier, including the control's name if it is exposed to UiPath):
Now, instead of adding multiple Type Into activities to your workflow, you would add just a single one, loop over each of the data table's rows, and then create a dynamic selector:
In my case the selector for the Type Into activity looks as follows:
"<wnd cls='#32770' title='General' /><wnd ctrlid='" + row(1).ToString() + "' />"
This will allow you to maintain the process from the Excel sheet alone - if there's a new field that needs to be mapped, just add it to your sheet. No changes to the Workflow are required.
I am currently getting files from FTP in Nifi, but I have to check some conditions before I fetch the file. The scenario goes some thing like this.
List FTP -> Check Condition -> Fetch FTP
In the Check Condition part, I have to fetch some values from the DB and compare them with the file name. So can I use UpdateAttribute to fetch some records from the DB and make it like this?
List FTP -> Update Attribute (from DB) -> Route on Attribute -> Fetch FTP
I think your flow looks something like below
Flow:
1. ListFTP // to list the files
2. ExecuteSQL // to execute a query against the DB that returns the last processed timestamp (see the sample query after this list)
3. ConvertAvroToJson // convert the result of ExecuteSQL to JSON format
4. EvaluateJsonPath // set Destination to flowfile-attribute and add a new property db_time with value $.db_time
5. RouteOnAttribute // compare the filename timestamp vs. the extracted timestamp using NiFi Expression Language
6. FetchFile // if the condition is true, fetch the file
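For step 2, the tracking query might look like the sketch below (the table and column names are illustrative, not from the question; what matters is the db_time alias, which step 4 extracts):

-- illustrative tracking query; the db_time alias is what
-- EvaluateJsonPath pulls out in step 4 via $.db_time
select max(processed_timestamp) as db_time
from file_tracking;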
RouteOnAttribute Configs:
I have assumed the filename is something like fn_2017-08-2012:09:10 and that ExecuteSQL has returned 2017-08-2012:08:10.
Expression:
${filename:substringAfter('_'):toDate("yyyy-MM-ddHH:mm:ss"):toNumber()
:gt(${db_time:toDate("yyyy-MM-ddHH:mm:ss"):toNumber()})}
In the above expression, the filename value is the one set by ListFTP, the db_time attribute is added by the EvaluateJsonPath processor, and we convert both timestamps to numbers before comparing them.
Refer to this link for more details regarding NiFi Expression Language.
So if I understand your use case correctly, you are using the external DB only for tracking purposes, so only the latest processed timestamp is needed. In that case, I would suggest using the DistributedMapCache processors and controller services offered by NiFi instead of relying on an external DB.
With this method, your flow would be like:
ListFile --> FetchDistributedMapCache --(success)--> RouteOnAttribute --> FetchFile
Configure FetchDistributedMapCache
Cache Entry Identifier - This is the key for your cache. Set it to something like lastProcessedTime.
Put Cache Value In Attribute - Whatever name you give here will be added as a FlowFile attribute, with its value being the cache value. Provide a name like lastProcessedTimestamp (the expression below assumes this name).
Configure RouteOnAttribute
Create a new dynamic relationship by clicking the (+) button in the Properties tab. Give it a name, like success or matches. Let's assume your filenames are of the format somefile_1534824139, i.e. a name, an underscore, and the epoch timestamp appended.
In such a case, you can leverage NiFi Expression Language and make use of the functions it offers. So for the new dynamic relationship, you can have an expression like:
success - ${filename:substringAfter('_'):gt(${lastProcessedTimestamp})}
This is with the assumption that, in FetchDistributedMapCache, you have configured the property Put Cache Value In Attribute with the value lastProcessedTimestamp.
Useful Links
https://community.hortonworks.com/questions/83118/how-to-put-data-in-putdistributedmapcache.html
https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#dates
I am a newbie in Oracle Forms. I have stored a PDF file in an Oracle database; now I want to read that PDF file and display its content in a text field in Oracle Forms.
How should I go about doing this?
Oracle Forms cannot natively display a PDF. If you are storing the actual contents of the PDF in the database, you can look into developing a PJC that leverages an existing Open Source PDF presentation layer, and embed it in the Oracle Form. You would then need to stream the contents of the database into the PJC, which would be tricky (but not impossible).
Your better bet would be to build a small PL/SQL package that can be accessed from a DAD to serve up the document, and fire off a web.show_document call to the URL from Oracle Forms.
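As a hedged sketch of that approach (the procedure, table, and column names here are illustrative, not from the question), a procedure exposed through a DAD could stream the stored PDF like this:

CREATE OR REPLACE PROCEDURE show_attachment (p_id IN NUMBER) AS
  l_blob BLOB;
BEGIN
  -- illustrative table and column names
  SELECT file_content INTO l_blob
    FROM attachments
   WHERE attachment_id = p_id;

  -- emit the HTTP headers, then stream the BLOB to the browser
  OWA_UTIL.MIME_HEADER('application/pdf', FALSE);
  HTP.P('Content-Length: ' || DBMS_LOB.GETLENGTH(l_blob));
  OWA_UTIL.HTTP_HEADER_CLOSE;
  WPG_DOCLOAD.DOWNLOAD_FILE(l_blob);
END show_attachment;
/

Forms would then open it with something like web.show_document('http://yourserver/dad/show_attachment?p_id=1', '_blank'), letting the browser's PDF viewer do the rendering.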
There is no built-in to do this.
But if you don't have to use the content of the PDF anywhere, i.e. you just want to view the PDF, then you may try this (the webutil utility is required):
DECLARE
  vboolean  BOOLEAN;
  vfilename VARCHAR2(200) := 'D:\files\abc.pdf'; -- client-side location for the file, with extension
BEGIN
  vboolean := webutil_file_transfer.DB_To_Client_With_Progress(
                vfilename,    -- destination file on the client
                'table_nm',   -- table name
                'field_nm',   -- column which contains your PDF
                'sr_no=1',    -- where clause: fetch the PDF of the row where sr_no = 1
                'Downloading from Database',
                'Wait to Complete');
  client_host('cmd /c start ' || vfilename); -- open the file
END;
If you want to make it generic, you can store the extension in the database and append it to the file name (the 1st parameter of DB_To_Client_With_Progress()); then you will be able to open any type of document stored in the database!
How can I view BLOB data? Can I export it to a text file? I am using Oracle SQL Developer 5.1. When I tried
select utl_raw.cast_to_varchar2(dbms_lob.substr(COLNAME))
from user_settings where <fieldname>=...
It returns the following error: ORA-06502 PL/SQL: numeric or value error: raw variable length too long.
The BLOB contains text in XML format.
To view XML data stored as a BLOB, do the following:
Open the table view, with the Data tab selected.
Double-click on the column field value, and a pencil button should appear in the field. Click the pencil button.
The Edit Value window should open; click the checkbox for 'View As: Text'. From this window you can also save out any particular file data you require.
PS: I'm running Oracle SQL Developer version 3.1.05
The cause is that the value is over the size that can be displayed; you need to set the size explicitly. Without an explicit amount, dbms_lob.substr reads up to 32767 bytes, which is too long for a VARCHAR2. Pass 1500 as the substr amount and it should work:
select utl_raw.cast_to_varchar2(dbms_lob.substr(colname,1500))
from user_settings where <row_id>=...
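If you also want the export-to-file part of the question rather than just a preview, a chunked read avoids the VARCHAR2 limit altogether. Here is a minimal sketch, assuming a directory object named DUMP_DIR exists and reusing the question's table and column names:

DECLARE
  l_blob   BLOB;
  l_file   UTL_FILE.FILE_TYPE;
  l_buffer RAW(32767);
  l_amount BINARY_INTEGER := 32767;
  l_pos    INTEGER := 1;
  l_len    INTEGER;
BEGIN
  SELECT colname INTO l_blob FROM user_settings WHERE ROWNUM = 1;
  l_len  := DBMS_LOB.GETLENGTH(l_blob);
  -- 'wb' opens the file in binary mode so the bytes are written untouched
  l_file := UTL_FILE.FOPEN('DUMP_DIR', 'settings.xml', 'wb', 32767);
  WHILE l_pos <= l_len LOOP
    DBMS_LOB.READ(l_blob, l_amount, l_pos, l_buffer);
    UTL_FILE.PUT_RAW(l_file, l_buffer, TRUE);
    l_pos := l_pos + l_amount;
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/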
BLOB data is typically just... a binary blob of data.
Sure, you can export it to a text file by converting it to some kind of text representation... But what if it is an image?
jaganath: You need to sit down and figure out what it is you're dealing with, and then find out what it is you need to do.
You could look at DBMS_LOB.CONVERTTOCLOB; a minimal sketch follows below.
But if it is XML, why store it in a BLOB rather than in an XMLType (or a CLOB)?
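A minimal CONVERTTOCLOB sketch, assuming the BLOB holds text in the database character set and reusing the question's table and column names:

DECLARE
  l_blob     BLOB;
  l_clob     CLOB;
  l_dest_off INTEGER := 1;
  l_src_off  INTEGER := 1;
  l_lang     INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
  l_warn     INTEGER;
BEGIN
  SELECT colname INTO l_blob FROM user_settings WHERE ROWNUM = 1;
  DBMS_LOB.CREATETEMPORARY(l_clob, TRUE);
  -- convert the raw bytes into character data
  DBMS_LOB.CONVERTTOCLOB(l_clob, l_blob, DBMS_LOB.LOBMAXSIZE,
                         l_dest_off, l_src_off,
                         DBMS_LOB.DEFAULT_CSID, l_lang, l_warn);
  -- show the first 4000 characters; the full CLOB can be written out as needed
  DBMS_OUTPUT.PUT_LINE(DBMS_LOB.SUBSTR(l_clob, 4000, 1));
  DBMS_LOB.FREETEMPORARY(l_clob);
END;
/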
From the error message it seems that the BLOB length is too long to fit into a VARCHAR2. You could do the conversion in your application code and write the XML into a String or a file.