How to use macros in FTL (FreeMarker)

I'm quite confused about implementing macros and functions in FTL.
Can anyone please add some useful information?
Also, what is the difference between a macro and a function in FTL?
Thanks

The difference between macros and functions: macros are for generating markup (or other long text) and for flow control and side effects in general. Functions are for calculating other kinds of values, including short plain text, and usually have no side effects. This is reflected by the fact that macros have no return value; they just print directly to the output. Also, the output of macros is not escaped by #escape. That's also why macro calls look similar to HTML tags, while ${myFunction()} doesn't.
Other than that, what are you confused about? I assume you have found the FreeMarker Manual.
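For a quick contrast, here is a minimal sketch of both constructs (greetingBox and avg are just illustrative names):
<#-- A macro: generates markup, is called like a tag with <@...>, and prints straight to the output -->
<#macro greetingBox name>
  <div class="greeting">Hello, ${name}!</div>
</#macro>
<@greetingBox name="World"/>

<#-- A function: computes and returns a value, and is called inside an interpolation -->
<#function avg x y>
  <#return (x + y) / 2>
</#function>
${avg(10, 20)}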

Below is an example of how to use macros in FTL :)
input_smooks.json:
{
  "title": "Payment Received",
  "firstName": "vijay",
  "lastName": "dwivedi",
  "accountId": "123",
  "paymentId": "456",
  "accounts": [
    {
      "accountId": "1111",
      "paymentId": "1112"
    },
    {
      "accountId": "2111",
      "paymentId": "2112"
    },
    {
      "accountId": "3111",
      "paymentId": "3112"
    }
  ]
}
smooks-config.xml file:
Define the macro once and reuse it wherever required:
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"
    xmlns:json="http://www.milyn.org/xsd/smooks/json-1.1.xsd"
    xmlns:ftl="http://www.milyn.org/xsd/smooks/freemarker-1.1.xsd">
  <params>
    <param name="stream.filter.type">SAX</param>
    <param name="default.serialization.on">false</param>
  </params>
  <json:reader rootName="json" keyWhitspaceReplacement="_">
    <json:keyMap>
      <json:key from="date&amp;time" to="date_and_time" />
    </json:keyMap>
  </json:reader>
  <resource-config selector="json">
    <resource>org.milyn.delivery.DomModelCreator</resource>
  </resource-config>
  <ftl:freemarker applyOnElement="json">
    <ftl:template>
      <!--
      <#macro PopulateTasks task_list>
        <#list task_list as att1>
          "accountId": "${att1.accountId}"
          "paymentId": "${att1.paymentId}"
        </#list>
      </#macro>
      <@PopulateTasks json.accounts.element/>
      -->
    </ftl:template>
  </ftl:freemarker>
</smooks-resource-list>
public static void main(String[] args) throws SmooksException, IOException, SAXException {
    long start = System.currentTimeMillis();
    Smooks smooks = new Smooks("src/main/resources/smooks-config.xml");
    try {
        smooks.filterSource(
                new StreamSource(new FileInputStream("src/main/resources/input_smooks.json")),
                new StreamResult(System.out));
    } finally {
        smooks.close();
    }
}
<@PopulateTasks json.accounts.element/> is how you call the macro; user-defined macros are invoked with @, while # is reserved for built-in directives.
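For contrast, the same loop could also be written as a function that builds and returns a string instead of printing directly; this is only a sketch (formatAccounts is a made-up name):
<#function formatAccounts task_list>
  <#local result = "">
  <#list task_list as att1>
    <#local result = result + att1.accountId + ":" + att1.paymentId + " ">
  </#list>
  <#return result>
</#function>
${formatAccounts(json.accounts.element)}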

Related

How do I use FreeFormTextRecordSetWriter

In my NiFi controller I want to configure the FreeFormTextRecordSetWriter, but I have no idea what I should put in the "Text" field. I'm getting the text from my source (in my case GetSolr), and just want to write this, period.
The documentation and mailing list do not seem to tell me how this is done; any help appreciated.
EDIT: Here is the sample input + output I want to achieve (as you can see: no transformation needed, plain text, no JSON input).
EDIT: I now realize that I can't tell GetSolr to return just CSV data - I have to use JSON.
So referencing by attribute seems to be fine. What the documentation omits is that the ${flowFile} attribute should contain the complete flowfile that is returned.
Sample input:
{
  "responseHeader": {
    "zkConnected": true,
    "status": 0,
    "QTime": 0,
    "params": {
      "q": "*:*",
      "_": "1553686715465"
    }
  },
  "response": {
    "numFound": 3194,
    "start": 0,
    "docs": [
      {
        "id": "{402EBE69-0000-CD1D-8FFF-D07756271B4E}",
        "MimeType": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
        "FileName": "Test.docx",
        "DateLastModified": "2019-03-27T08:05:00.103Z",
        "_version_": 1629145864291221504,
        "LAST_UPDATE": "2019-03-27T08:16:08.451Z"
      }
    ]
  }
}
Wanted output:
{402EBE69-0000-CD1D-8FFF-D07756271B4E}
BTW: The documentation says this:
The text to use when writing the results. This property will evaluate the Expression Language using any of the fields available in a Record.
Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)
I want to use my source's text, so I'm confused.
You need to use expression language as if the record's fields are the FlowFile's attributes.
Example:
Input:
{
  "t1": "test",
  "t2": "ttt",
  "hello": true,
  "testN": 1
}
Text property in FreeFormTextRecordSetWriter:
${t1} k!${t2} ${hello}:boolean
${testN}Num
Output (using ConvertRecord):
test k!ttt true:boolean
1Num
EDIT:
It seems what you need is to read from Solr and write a single-column CSV. For that you need to use CSVRecordSetWriter.
I should tell you to consider upgrading to 1.9.1; starting from 1.9.0, the schema can be inferred for you.
Otherwise, you can set Schema Access Strategy to Use 'Schema Text' Property,
then use the following schema in Schema Text:
{
  "name": "MyClass",
  "type": "record",
  "namespace": "com.acme.avro",
  "fields": [
    {
      "name": "id",
      "type": "string"
    }
  ]
}
This should work.
I'll edit it into my answer. If it works for you, please choose my answer :)

A way to colorize the #regions in VS Code C#

I'm trying to get as comfortable as possible with this new IDE (coming from Visual Studio Community for Windows).
I used a very specific color theme that allowed me to understand the parts of the code at a glance. With VS Code, though, it's more complicated for me, as there aren't many options under Settings > editor.tokenColorCustomizations.
Is there a way to colorize the #region pre-processor directives with a specific color?
Thanks.
Yes, it seems it has a distinct scope name (keyword.preprocessor.region), allowing you to target it with the setting, as the Developer: Show TM Scopes command shows:
"editor.tokenColorCustomizations": {
"textMateRules": [
{
"scope": "keyword.preprocessor.region.cs",
"settings": {
"foreground": "#FF0000"
}
},
{
"scope": "keyword.preprocessor.endregion.cs",
"settings": {
"foreground": "#FF0000"
}
}
]
}
It seems the scope includes neither the # nor the region name text, though.

TextMate scope for triple quoted Python docstrings

I'm currently setting up VS Code for Python development. I'd like to have triple-quoted docstrings highlighted as comments, not as strings, i.e. grey instead of light green.
I know that I can adjust this in the TextMate rules for this theme, but I can't figure out the right scope for Python docstrings. I thought it would be something like this:
"editor.tokenColorCustomizations": {
"[Predawn]": {
"comments": "#777777",
"textMateRules": [
{
"scope": "string.quoted.triple",
"settings": {
"foreground": "#777777"
}
}
]
},
}
but that does not have the desired effect, even after restarting the editor. Does anyone know what the right scope is?
Just to expand on the comments above, the scopes are:
For docstrings: string.quoted.docstring.multi.python for """ ''' (or .single for ' ")
For triple quote strings that are not docstrings: string.quoted.multi.python
The scope string.quoted.triple is not used, even though it appears in settings.json autocomplete.
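To illustrate where each of these scopes applies, here is a small Python sketch (scope names as described above):
"""Module docstring: scoped as string.quoted.docstring.multi.python."""

def load(path):
    """Function docstring: also string.quoted.docstring.multi.python."""
    query = """A triple-quoted value, not a docstring: string.quoted.multi.python."""
    return path, query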
Try using this one:
"editor.tokenColorCustomizations": {
  "textMateRules": [
    {
      "scope": [
        "string.quoted.multi.python",
        "string.quoted.double.block.python",
        "string.quoted.triple",
        "string.quoted.docstring.multi.python",
        "string.quoted.docstring.multi.python punctuation.definition.string.begin.python",
        "string.quoted.docstring.multi.python punctuation.definition.string.end.python",
        "string.quoted.docstring.multi.python constant.character.escape.python"
      ],
      "settings": {
        "foreground": "#777777" // change to your preference
      }
    }
  ]
}

How to get name/confidence individually from classify_text?

Most of the other methods in the language API, such as analyze_syntax, analyze_sentiment, etc., have the ability to return the constituent elements like
sentiment.score
sentiment.magnitude
token.part_of_speech.tag
and so on,
but I have not found a way to return name and confidence in isolation from classify_text. It doesn't look like it's possible, but that seems weird. Am I missing something? Thanks
The language.documents.classifyText method returns a ClassificationCategory object which contains name and confidence. If you only want one of the fields you can filter by categories/name or categories/confidence. As an example I executed:
POST https://language.googleapis.com/v1/documents:classifyText?fields=categories%2Fname&key={YOUR_API_KEY}
{
  "document": {
    "content": "this is a test for a StackOverflow question. I get an error because I need more words in the document and I don't know what else to say",
    "type": "PLAIN_TEXT"
  }
}
Which returns:
{
  "categories": [
    {
      "name": "/Science/Computer Science"
    },
    {
      "name": "/Computers & Electronics/Programming"
    },
    {
      "name": "/Jobs & Education"
    }
  ]
}
Direct link to API explorer for interactive testing of my example (change content, filters, etc.)
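If you are calling the API through the Python client library (which the snake_case method names in the question suggest), each returned category exposes name and confidence as separate attributes. A minimal sketch, assuming a recent google-cloud-language v1 client (the exact call signature varies slightly between client versions):
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="A long enough text about programming and computer science to classify.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)
response = client.classify_text(request={"document": document})

# Each ClassificationCategory has name and confidence fields you can read individually.
for category in response.categories:
    print(category.name, category.confidence)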

Changing bags into arrays in Pig Latin

I'm doing some transformations on some data set and need to publish it in a sane-looking format. Currently my final set looks like this when I run DESCRIBE:
{memberId: long,companyIds: {(subsidiary: long)}}
I need it to look like this:
{memberId: long,companyIds: [long] }
where companyIds is the key to an array of ids of type long.
I'm really struggling with how to manipulate things in this way. Any ideas? I've tried using FLATTEN and other commands to no avail. I'm using AvroStorage to write the files with this schema:
The field schema I need to write this data to looks like this:
"fields": [
{ "name": "memberId", "type": "long"},
{ "name": "companyIds", "type": {"type": "array", "items": "int"}}
]
There is no array type in Pig (http://pig.apache.org/docs/r0.10.0/basic.html#data-types). However, if all you need is good-looking output and you don't have too many elements in companyIds, you may want to write a simple UDF that converts the bag into a nicely formatted string.
Java code
import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.apache.commons.lang.StringUtils;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.Tuple;

public class BagToString extends EvalFunc<String>
{
    @Override
    public String exec(Tuple input) throws IOException
    {
        List<String> strings = new ArrayList<String>();
        DataBag bag = (DataBag) input.get(0);
        if (bag.size() == 0) {
            return null;
        }
        for (Iterator<Tuple> it = bag.iterator(); it.hasNext();) {
            Tuple t = it.next();
            strings.add(t.get(0).toString());
        }
        return StringUtils.join(strings, ":");
    }
}
PIG script
foo = foreach bar generate memberId, BagToString(companyIds);
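To use this UDF from the script, you would first register the jar that contains it and optionally define a short alias; the jar path and package name below are placeholders:
REGISTER bag-to-string-udf.jar;
DEFINE BagToString com.example.pig.BagToString();

foo = foreach bar generate memberId, BagToString(companyIds);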
I know this is a bit old, but I recently ran into the same problem.
Based on the AvroStorage documentation, using the latest versions of Pig and AvroStorage, it is possible to directly cast a bag to an Avro array.
In your case, you may want something like:
STORE blah INTO 'blah' USING AvroStorage('schema','{your schema}');
where the array field in the schema is
{
  "name": "companyIds",
  "type": [
    "null",
    {
      "type": "array",
      "items": "long"
    }
  ],
  "doc": "company ids"
}
