I have a table with a field that contains a comma-separated list of order IDs. What I am trying to do is output each of those IDs separately so that I can use them to look up the corresponding order details in another table.
Does anyone know of a good way to do this?
So far I have tried:
<#data Alerts_test_table as alerts_test>
<#filter
CUSTOMER_ID_=CONTACTS_LIST.CUSTOMER_ID_1>
<#fields AD_ID_LIST>
<#assign seq = ['${alerts_test.AD_ID_LIST}']>
<#list seq?chunk(1) as row><#list row as cell>
${cell}
</#list> </#list>
</#data>
But this just outputs it as a line of text.
Let's say you have comma-separated IDs in idsString, and you want to iterate through the IDs one by one. Then:
Data-model (with http://try.freemarker.org/ syntax):
idsString = "11, 22, 33"
Template:
<#list idsString?split(r'\s*,\s*', 'r') as idString>
${idString}
</#list>
However, in the template you have posted I see many strange things, so some ideas/pointers:
Be sure that alerts_test.AD_ID_LIST is indeed a String, not a List that you can list directly, without ?split-ing. If it's a String, then '${alerts_test.AD_ID_LIST}' is ultimately the same as alerts_test.AD_ID_LIST. In general, you never need a string literal that contains nothing but an ${}. It's not useful (and sometimes harmful, as it converts non-string values to strings).
?chunk(1) is not useful. The point of ?chunk is to slice a list into smaller lists, but if those smaller lists are only 1 item long, then you might as well just list the items of the original list.
There are no such directives as #data and #filter. Is this some forked FreeMarker version?
If the ID is a number that you need to pass to some API that expects a number, then you will have to convert it to a number, like idString?number.
Related
I have a long date list with corresponding price values.
Out of this list I need the last price of every month.
I sliced the list and tried to get the last value, but can't!
Please help.
Here is part of the list:
Here is my code:
<#assign new_list><#list reportData.daily as list><#if list?string('M')?number=6 && list?string('Y')?number=2017>${list?string("dd/MM/yyyy")}</#if></#list></#assign>${new_list?last}
If I remove the "?last" it will show me a sublist of all dates for June 2017, but once I add "?last" I get:
For "?last" left-hand operand: Expected a sequence, but this has evaluated to a string (wrapper: f.t.SimpleScalar):
You get that error because <#assign new_list>...</#assign> captures the raw printed output, which is not a sequence. If you really have to do this in a template, put an #if inside the #list that only prints the current item if the next one is in a different month. (You can peek at the next item inside <#list xs as x>...</#list> with xs[x?index + 1].) But you aren't supposed to do this kind of data processing in a template...
I have a text file with two columns. The values in the first column ("key") are all different; the values in the second column - these strings have a length between 10 and approximately 200 - have some duplicates. The number of duplicates varies. Some strings - especially the longer ones - don't have any duplicate, while others might have 20 duplicate occurrences.
key1 valueX
key2 valueY
key3 valueX
key4 valueZ
I would like to represent this data as a hash. Because of the large number of keys and the existence of duplicate values, I am wondering whether some method of sharing common strings would be helpful.
The data in the file is essentially constant, i.e. I can put in effort (in time or space) to preprocess it in a suitable way, as long as it can be accessed efficiently once it has entered my application.
I will now outline an algorithm which I believe would solve the problem. My question is whether the algorithm is sound and whether it could be improved. I would also like to know whether using freeze on the strings would provide an additional optimization:
In a separate preprocessing step, I find out which string values are indeed duplicates, and I annotate the data accordingly (i.e. create a third column in the file), so that every occurrence of a repeated string except the first points back to the first occurrence:
key1 valueX
key2 valueY
key3 valueX key1
key4 valueZ
When my application reads the data into memory (line by line), I use this annotation to create a pointer to the original string instead of allocating a new one:
if columns.size == 3
  myHash[columns[0]] = myHash[columns[2]] # Subsequent occurrence: share the string stored under the referenced key
else
  myHash[columns[0]] = columns[1] # First occurrence of the string
end
Will this achieve my goal? Can it be done any better?
One way you could do this is using symbols.
["a", "b", "c", "a", "d", "c"].each do |c|
puts c.intern.object_id
end
417768 #a
313128 #b
312328 #c
417768 #a
433128 #d
312328 #c
Note how the duplicated strings got the same value.
You can turn a string into a symbol with the intern method. If you intern an equal string, you get the same symbol out, like in the flyweight pattern.
If you store the symbol in your hash, you'll have each string just a single time. When it's time to use the symbol, just call .to_s on it and you'll get the string back. (I'm not sure how to_s works internally; it may allocate a new string on each call.) Another idea would be to cache the strings yourself, i.e. keep an integer-to-string cache hash and just put the integer key in your data structures. When you need the string, you can look it up.
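A minimal sketch of the cache-it-yourself idea, using the sample rows from the question (the cache hash itself is my assumption, not code from the question):

```ruby
# Flyweight-style string cache: each distinct value is stored once,
# and all duplicates share the same frozen String object.
cache = Hash.new { |h, v| h[v] = v.freeze }

rows = [%w[key1 valueX], %w[key2 valueY], %w[key3 valueX], %w[key4 valueZ]]

table = {}
rows.each { |key, value| table[key] = cache[value] }

table["key1"].equal?(table["key3"])  # => true: one shared object for "valueX"
```

With this approach no preprocessing pass or third column is needed; the cache discovers the duplicates as the file is read. On Ruby 2.5+ the built-in frozen-string table can do the same deduplication via the unary minus operator (-string), which may make a hand-rolled cache unnecessary.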
I am writing a script to convert JSON data to an ordered CSV spreadsheet.
The JSON data itself does not necessarily contain all keys (some fields in the spreadsheet should say "NA").
Typical JSON data looks like this:
json = {"ReferringUrl":"N","PubEndDate":"2010/05/30","ItmId":"347628959","ParentItemId":"46999"}
I have a list of the keys found in each column of the spreadsheet:
keys = ["ReferringUrl", "PubEndDate", "ItmId", "ParentItemId", "OtherKey", "Etc"]
My thought was that I could iterate through each line of JSON like this:
parsed = JSON.parse(json)
result = (0..keys.length).map{ |i| parsed[keys[i]] || 'NA'} #add values associated with keys to an array, using NA if no value is present
CSV.open('file.csv', 'wb') do |csv|
csv << keys #create headings on spreadsheet
csv << result #load data associated with headings into the next line
end
Ideally, this would create a CSV file with the proper information in the proper order in a spreadsheet. However, what happens is the result data comes in completely out of order, and contains an extra column that I don't know what to do with.
Looking at the actual data, since there are actually about 100 keys and most of the fields contain NA, it is very difficult to determine what is happening.
Any advice?
The extra column comes from 0..keys.length, which includes the end of the range. The last value of result is going to be parsed[keys[keys.length]], i.e. parsed[nil], i.e. nil. You can avoid that entirely by mapping over keys directly:
result = keys.map { |key| parsed.fetch(key, 'NA') }
As for the random order of the values, I suspect you aren't giving us all of the relevant information, because I tested your code and the result came out in the same order as keys.
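Putting that together with the sample data from the question (a runnable sketch; I've used CSV.generate instead of CSV.open so the result is easy to inspect without writing a file):

```ruby
require "json"
require "csv"

json = '{"ReferringUrl":"N","PubEndDate":"2010/05/30","ItmId":"347628959","ParentItemId":"46999"}'
keys = ["ReferringUrl", "PubEndDate", "ItmId", "ParentItemId", "OtherKey", "Etc"]

parsed = JSON.parse(json)
# fetch falls back to "NA" for keys missing from this JSON object
result = keys.map { |key| parsed.fetch(key, "NA") }
# => ["N", "2010/05/30", "347628959", "46999", "NA", "NA"]

csv_text = CSV.generate do |csv|
  csv << keys    # headings
  csv << result  # one data row, in the same order as keys
end
```

Because the values are produced by mapping over keys, the columns cannot come out in a different order than the headings.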
Range has two possible notations
..
and
...
... is exclusive, meaning the range (A...B) would not include B.
Change to
result = (0...keys.length).map{ |i| parsed[keys[i]] || 'NA'} #add values associated with keys to an array, using NA if no value is present
And see if that prevents the last value in that range from evaluating to nil.
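The difference between the two notations is easy to check in irb (a minimal illustration, not the asker's data):

```ruby
(0..3).to_a   # two dots: end included  => [0, 1, 2, 3]
(0...3).to_a  # three dots: end excluded => [0, 1, 2]
```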
Sorry for this extreme beginner question. I have a string variable originaltext containing some multiline text. I can convert it into an array of lines like so:
lines = originaltext.split("\n");
But I need help sorting this array. This DOES NOT work:
lines.sort;
The array remains unsorted.
And an associated question. Assuming I can sort my array somehow, how do I then convert it back to a single variable with no separators?
Your only issue is a small one - sort is actually a method, so you need to call lines.sort(). In order to join the elements together, you can use the join() method:
var originaltext = "This\nis\na\nline";
lines = originaltext.split("\n");
lines.sort();
joined = lines.join("");
I was wondering if anyone had any advice on parsing a file with fixed-length records in Ruby. The file has several sections; each section has a header, n data elements and a footer. For example (this is total nonsense, but has roughly similar content):
1923 000-230SomeHeader 0303030
209231-231992395 MoreData
293894-329899834 SomeData
298342-323423409 OtherData
3 3423942Footer record 9832422
Headers, footers and data rows each begin with a specific number (1, 2 and 3 in this example).
I have looked at http://rubyforge.org/projects/file-formatter/ and it looks good - except that the documentation is light and I can't see how to have n data elements.
Cheers,
Dan
There are a number of ways to do this. The unpack method of String can be used to define a pattern of fields as follows:
"209231-231992395 MoreData".unpack('aa5A1A9a4Z*')
This returns an array as follows:
["2", "09231", "-", "231992395", " ", "MoreData"]
See the documentation for a description of the pack/unpack format.
Several options exist as usual.
If you want to do it manually I would suggest something like this:
very pseudo-code:
read file
while lines in file
  handle_line(line)
end

def handle_line(line)
  type = first_char_of(line)
  parse_line(line, type)
end

def parse_line(line, type)
  split line into elements and do_whatever_to_them
end
Splitting the line into elements of fixed width can be done with, for instance, unpack():
irb(main):001:0> line="1923 000-230SomeHeader 0303030"
=> "1923 000-230SomeHeader 0303030"
irb(main):002:0* list=line.unpack("A1A5A7a15A10")
=> ["1", "923", "000-230", "SomeHeader ", "0303030"]
irb(main):003:0>
The pattern used for unpack() will vary with the field lengths of the different kinds of records, and the code will depend on whether you want trailing spaces and such. See the unpack reference for details.
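The pseudocode and unpack() can be combined into a runnable sketch. Note that the field widths in the patterns below (and the sample data line) are invented for illustration; they would have to match the real record layouts:

```ruby
# One unpack pattern per record type, keyed by the leading type digit.
PATTERNS = {
  "1" => "A1A5A7A15A10",  # header (widths are assumptions)
  "2" => "A1A5A1A9A4Z*",  # data
  "3" => "A1A8Z*",        # footer
}

def parse_record(line)
  # Dispatch on the first character; fetch raises on an unknown type.
  line.unpack(PATTERNS.fetch(line[0]))
end

parse_record("2AAAAA-BBBBBBBBB    MoreData")
# => ["2", "AAAAA", "-", "BBBBBBBBB", "", "MoreData"]
```

Using "A" directives strips trailing spaces from each field; switch any of them to "a" where the padding must be preserved.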