How to create a set in Terraform

I am trying to create a set to use as an argument to the setproduct function in Terraform. When I try:
toset([a, b, c])
I get an error saying a tuple can't be converted to a list. I've tried various things, like using tolist, the ... expansion symbol, and placing a pair of parentheses in various places, but I still can't get this to work. Would anyone know how I can create a set from a, b and c?

A set must have elements of the same type. Thus it can be:
# set of strings
toset(["a", "b", "c"])
# set of numbers
toset([1, 2, 3])
# set of lists
toset([["b"], ["c", 4], [3,3]])
You can't mix types, so the error you are getting is because you are mixing types, e.g. a list and a number:
# will not work because different types
toset([["b"], ["c", 4], 3])


How to check if a property value is a list in Memgraph?

I am using Memgraph Platform 2.6.5 and I want to check whether the property of a node is a list. I see there is a type function for relationships, but it would be good to have some kind of data type filtering for properties. I tried using the size function, but it also works on strings and paths, so it can't tell whether a property is a list or not. Any idea how to do that?
Here is a small trick for doing that:
If you have a node with list and string properties:
CREATE (n:Node {list: [1, 2, 3, 4, 5, 6, 7, 8], str: "myString"});
And then run:
MATCH (n:Node)
RETURN size(n.list + 11) = size(n.list) + 1 AS isList;
I get true.
Otherwise, if I run this:
MATCH (n:Node)
RETURN size(n.str + 11) = size(n.str) + 1 AS isList;
I get false.
This is because appending a number consisting of two characters to a string yields a string whose size has increased by 2, while appending the same number to a list increases the list's size by 1.
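As a usage sketch, the same trick can filter for nodes whose property is a list (this assumes the property supports size(), i.e. it is a string, list, or path; the label and property name come from the example above):
MATCH (n:Node)
WHERE size(n.list + 11) = size(n.list) + 1
RETURN n;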
Besides that, you can always create a custom procedure in Memgraph to extend the Cypher query language. These procedures can be written in Python, C/C++ or Rust, and there is a how-to guide for that.

Is method cascading possible here?

I have three lines of code, shown below:
local = headers.zip(*data_rows).transpose
local = local[1..-1].map {|dataRow| local[0].zip(dataRow).to_h}
p local
Now, looking at the three lines above, I have to store the result of the first line in a variable called local, since it is used in two places in the second line. Can't I cascade the second line onto the first somehow? I tried using tap like this:
local = headers.zip(*data_rows).transpose.tap { |h| h[1..-1].map { |dataRow| h[0].zip(dataRow).to_h } }
tap returns self, as explained in the documentation, so I can't get the final result when I use tap. Is there another way to achieve this result in a single line, so that I don't have to use a local variable?
If you're on Ruby 2.5.0 or later, you can use yield_self for this.
local = headers.zip(*data_rows).transpose.yield_self { |h| h[1..-1].map { |dataRow| h[0].zip(dataRow).to_h } }
yield_self is similar to tap in that they both yield self to the block. The difference is in what is returned by each of the two methods.
Object#tap yields self to the block and then returns self. Kernel#yield_self yields self to the block and then returns the result of the block.
Here's an answer to a previous question where I gave a couple of further examples of where each of these methods can be useful.
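A minimal illustration of the difference (note that since Ruby 2.6, then is an alias for yield_self):
[1, 2, 3].tap        { |a| a.sum }  #=> [1, 2, 3]  (tap returns self)
[1, 2, 3].yield_self { |a| a.sum }  #=> 6          (yield_self returns the block's result)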
It's often helpful to execute working code with data, to better understand what is to be computed. Seeing transpose and zip, which are often interchangeable, used together was a clue that a simplification might be possible (a = [1, 2, 3]; b = [4, 5, 6]; a.zip(b) #=> [[1, 4], [2, 5], [3, 6]], the same as [a, b].transpose).
Here's my data:
headers=[1,2,3]
data_rows=[[11,12,13],[21,22,23],[31,32,33],[41,42,43]]
and here's what the working code returns:
local = headers.zip(*data_rows).transpose
local[1..-1].map {|dataRow| local[0].zip(dataRow).to_h}
#=> [{1=>11, 2=>12, 3=>13}, {1=>21, 2=>22, 3=>23},
# {1=>31, 2=>32, 3=>33}, {1=>41, 2=>42, 3=>43}]
It would seem that this might be computed more simply:
data_rows.map { |row| headers.zip(row).to_h }
#=> [{1=>11, 2=>12, 3=>13}, {1=>21, 2=>22, 3=>23},
# {1=>31, 2=>32, 3=>33}, {1=>41, 2=>42, 3=>43}]

JSONata prevent array flattening

Q: How do I prevent JSONata from "auto-flattening" arrays in an array constructor?
Given JSON data:
{
  "w": true,
  "x": ["a", "b"],
  "y": [1, 2, 3],
  "z": 9
}
the JSONata query seems to select 4 values:
[$.w, $.x, $.y, $.z]
The nested arrays at $.x and $.y are getting flattened/inlined into my outer wrapper, resulting in more than 4 values:
[ true, "a", "b", 1, 2, 3, 9 ]
The results I would like to achieve are
[ true, ["a", "b"], [1, 2, 3], 9 ]
I can achieve this by using
[$.w, [$.x], [$.y], $.z]
But this requires me to know a priori that $.x and $.y are arrays.
I would like to select 4 values and have the resulting array contain exactly 4 values, independent of the types of values that are selected.
There are clearly some things about the interactions between JSONata sequences and arrays that I can't get my head around.
In common with XPath/XQuery sequences, JSONata will flatten the results of a path expression into the output array. It is possible to avoid this in your example by using the $each higher-order function to iterate over the object's key/value pairs. The following expression will get what you want without any flattening of results:
$each($, function($v) {
  $v
})
This just returns the value for each property in the object.
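Applied to the sample data above, this yields the four values without flattening them:
$each($, function($v) { $v })
/* => [ true, ["a", "b"], [1, 2, 3], 9 ] */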
UPDATE: Extending this answer for your updated question:
I think this is related to a previous GitHub question on how to combine several independent queries into a single expression. That approach uses an object to hold all the queries, in a similar manner to the one you arrived at. Perhaps a slightly clearer expression would be this:
{
  "1": t,
  "2": u.i,
  "3": u.j,
  "4": u.k,
  "5": u.l,
  "6": v
} ~> $each(λ($v){$v})
The λ is just a shorthand for function, if you can find it on your keyboard (F12 in the JSONata Exerciser).
I am struggling to rephrase my question in such a way as to describe the difficulties I am having with JSONata's sequence-like treatment of arrays.
I need to run several queries to extract several values from the same JSON tree. I would like to construct one JSONata query expression which extracts n data items (or runs n subqueries) and returns exactly n values in an ordered array.
This example seems to request 6 values, but because of array flattening the result array does not contain 6 values.
This example explicitly wraps each query in an array constructor so that the result has 6 values. However, the values which are not arrays are wrapped in an extraneous and undesirable array. In addition, one cannot determine what the original type was ...
This example shows the result that I am trying to accomplish ... I asked for 6 things and I got 6 values back. However, I must know the datatypes of the values I am fetching and explicitly wrap the arrays in an array constructor to work around the sequence flattening.
This example shows what I want. I queried 6 things and got back 6 answers without knowing the datatypes. But I have to introduce an object as a temporary container in order to work around the array flattening behavior.
I have not found any predicates that allow me to test the type of a value in a query ... which might have let me use the ?: operator to dynamically decide whether or not to wrap arrays in an array constructor. e.g. $isArray($.foo) ? [$.foo] : $.foo
Q: Is there an easier way for me to (effectively) submit 6 "path" queries and get back 6 values in an ordered array without knowing the data types of the values I am querying?
Building on the example from Acoleman, here is a way to pass in n "query" strings (that represent paths):
(['t', 'u.i', 'u.j', 'u.k', 'u.l', 'v'] {
  $: $eval('$$.' & $)
}).$each(function($o) {$o})
and get back an array of n results in their original data format:
[
  12345,
  [
    "i",
    "ii",
    "iii"
  ],
  [],
  "K",
  {
    "L": "LL"
  },
  null
]
It seems that using $each is the only way to avoid any flattening...
Granted, this is probably not the most efficient of expressions, since each query has to be evaluated from a path string starting at the root of the data structure -- but there ya go.

Can I count on partition preserving order?

Say I have a sorted Array, such as this:
myArray = [1, 2, 3, 4, 5, 6]
Suppose I call Enumerable#partition on it:
p myArray.partition(&:odd?)
Must the output always be the following?
[[1, 3, 5], [2, 4, 6]]
The documentation doesn't state this; this is what it says:
partition { |obj| block } → [ true_array, false_array ]
partition → an_enumerator
Returns two arrays, the first containing the elements of enum for which the block evaluates to true, the second containing the rest.
If no block is given, an enumerator is returned instead.
But it seems logical to assume partition works this way.
Testing with Matz's interpreter, the output does appear to work like this, and it makes complete sense for it to be this way. However, can I count on partition working this way regardless of the Ruby version or interpreter?
Note: I tagged this implementation-agnostic because I couldn't find any other tag that describes my concern. Feel free to change the tag to something better if you know of one.
No, you can't rely on the order. The reason is parallelism.
A traditional serial implementation of partition would loop through each element of the array, evaluating the block one element at a time, in order. As each call to odd? returns, the element is immediately pushed into the appropriate true or false array.
Now imagine an implementation which takes advantage of multiple CPU cores. It still iterates through the array in order, but each call to odd? can return out of order. The call for myArray[2] might return before the call for myArray[0], resulting in [[3, 1, 5], [2, 4, 6]].
List processing idioms such as partition, which run a list through a function (most of Enumerable does this), benefit greatly from parallel processing, and most computers these days have multiple cores. I wouldn't be surprised if a future Ruby implementation took advantage of this. The writers of the API documentation for Enumerable likely omitted any mention of processing order deliberately, to leave this optimization possibility open.
The documentation makes no explicit mention of this, but judging from the official code, it does retain ordering:
static VALUE
partition_i(RB_BLOCK_CALL_FUNC_ARGLIST(i, arys))
{
    struct MEMO *memo = MEMO_CAST(arys);
    VALUE ary;

    ENUM_WANT_SVALUE();
    if (RTEST(enum_yield(argc, i))) {
        ary = memo->v1;
    }
    else {
        ary = memo->v2;
    }
    rb_ary_push(ary, i);
    return Qnil;
}
This code gets called from the public interface.
Essentially, the order in which your enumerable emits objects is retained by the above logic.
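A pure-Ruby sketch of the same logic (the helper name here is made up) shows why order is preserved: each element is pushed into one of the two arrays as soon as the block returns, in iteration order.
# mirrors partition_i above: push each element, in order, into one of two arrays
def ordered_partition(enum)
  true_ary, false_ary = [], []
  enum.each { |e| (yield(e) ? true_ary : false_ary) << e }
  [true_ary, false_ary]
end

ordered_partition([1, 2, 3, 4, 5, 6], &:odd?) #=> [[1, 3, 5], [2, 4, 6]]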

Changing one array in an array of arrays changes them all; why?

a = Array.new(3, [])
a[1][0] = 5
a #=> [[5], [5], [5]]
I thought this didn't make sense! Shouldn't it be a #=> [[], [5], []]? Or is this some sort of Ruby feature?
Use this instead:
a = Array.new(3){ [] }
With your code the same object is used for the value of each entry; once you mutate one of the references you see all others change. With the above you instead invoke the block each time a new value is needed, which returns a new array each time.
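A quick way to see the difference is to compare object identities; with the block form, each slot holds a distinct array:
a = Array.new(3, [])
a.map(&:object_id).uniq.size  #=> 1  (one shared array in every slot)

b = Array.new(3) { [] }
b.map(&:object_id).uniq.size  #=> 3  (three distinct arrays)
b[1][0] = 5
b                             #=> [[], [5], []]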
This is similar in nature to the new user question about why the following does not work as expected:
str.gsub /<([a-z]+)>/, "-->#{$1}<--"
In the above, string interpolation occurs before the gsub method is ever called, so it cannot use the then-current value of $1 in your string. Similarly, in your question you create an object and pass it to Array.new before Ruby starts creating array slots. Yes, the runtime could call dup on the item by default…but that would be potentially disastrous and slow. Hence you get the block form to determine on your own how to create the initial values.
